Commit Graph

67 Commits

Author SHA1 Message Date
decolua
3c96e8d6d1 feat: add TTS (text-to-speech) support 2026-04-10 10:17:53 +07:00
decolua
39545cf4c8 Fix combo modal 2026-04-09 10:16:11 +07:00
decolua
401772cb9a fix: image-stripping bug 2026-04-07 10:18:59 +07:00
decolua
307be3b63d Fix bug 2026-04-06 17:32:44 +07:00
decolua
57cfaccceb Fix ModelSelectModal 2026-04-05 21:25:00 +07:00
decolua
333e704b2a MODEL_CAPS 2026-04-04 23:24:24 +07:00
kwanLeeFrmVi
054facb08b fix(gemini): preserve thoughtSignature via tool_call ID smuggling + fix ELOCKED mutex
- Encode thoughtSignature into tool_call.id using _TSIG_ delimiter and base64url
- Decode _TSIG_ on request to restore thoughtSignature for Gemini multi-turn thinking
- Track pendingThoughtSignature across parts for deferred signature attachment
- Add LocalMutex (2-layer locking) to prevent ELOCKED on concurrent DB access
- Increase lockfile retries from 5 to 15 for multi-process robustness
- Restore db.json seed on first run to prevent ENOENT on lockfile.lock
- Use process.env.BASE_URL fallback in models test route
- Remove gemini-3-flash-lite-preview from provider models
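The first two bullets can be sketched as follows (a minimal Node.js sketch; the delimiter placement and the `encodeToolCallId`/`decodeToolCallId` names are assumptions based on the bullet text, not the project's actual exports):

```javascript
// Hedged sketch of the "_TSIG_" ID-smuggling scheme: the Gemini
// thoughtSignature is base64url-encoded and appended to the tool_call id,
// so it survives a round trip through OpenAI-shaped clients that only
// echo tool_call ids back.
const DELIM = "_TSIG_";

function encodeToolCallId(id, thoughtSignature) {
  if (!thoughtSignature) return id;
  return id + DELIM + Buffer.from(thoughtSignature).toString("base64url");
}

function decodeToolCallId(encoded) {
  const at = encoded.indexOf(DELIM);
  if (at === -1) return { id: encoded, thoughtSignature: null };
  return {
    id: encoded.slice(0, at),
    thoughtSignature: Buffer.from(
      encoded.slice(at + DELIM.length), "base64url").toString(),
  };
}
```

On the next request, decoding restores the original id for the client-visible payload and the signature for Gemini's multi-turn thinking context.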

Co-authored-by: kwanLeeFrmVi <quanle96@outlook.com>
Closes #450

Made-with: Cursor
2026-03-30 16:57:28 +07:00
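The in-process half of the "2-layer locking" above could look roughly like this (a sketch of the first layer only; the `LocalMutex` shape is an assumption, and the second layer, the cross-process lockfile with retries, is omitted):

```javascript
// Hedged sketch: an in-process mutex that serializes DB writes before the
// cross-process lockfile is even attempted, so concurrent requests inside
// one process cannot race each other into ELOCKED.
class LocalMutex {
  constructor() { this.tail = Promise.resolve(); }
  // Run fn only after every previously queued fn has settled.
  runExclusive(fn) {
    const run = this.tail.then(() => fn());
    this.tail = run.then(() => {}, () => {}); // keep the chain alive on errors
    return run;
  }
}
```

Each caller awaits `runExclusive(...)`; because the chain is kept alive even when a queued function rejects, one failed write cannot deadlock later ones.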
decolua
abbf8ec86f feat: add GitLab Duo and CodeBuddy support, update observability settings 2026-03-30 11:28:07 +07:00
Ryan
56be393a59 feat: expand OpenAI and Gemini static model lists (#398)
OpenAI: add GPT-5.x series, GPT-4.1 variants, o3/o4 reasoning
models, embedding models, and TTS models (5 → 26 models).

Gemini: add 3.1 Flash Image, 3 Flash Lite, 2.0 Flash/Lite,
Embedding 2; remove deprecated 3 Pro Preview (10 → 14 models).

Closes #179, partially addresses #178.
2026-03-27 10:39:07 +07:00
Anurag Saxena
a0500dfc85 feat: add MiniMax M2.7 model support (#357)
Add MiniMax-M2.7 to provider models and pricing config alongside
existing M2.5. M2.7 is the latest reasoning model with 204K context.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-19 15:34:36 +07:00
Quan
39f651f5be feat: Add Google Cloud Vertex AI provider support (vertex, vertex-partner)
Co-authored-by: Quan <quanle96@outlook.com>
PR: https://github.com/decolua/9router/pull/298

Thanks to @kwanLeeFrmVi for the original implementation. Here is a summary
of changes made during review integration:

- Replaced google-auth-library with jose (already a project dependency)
  for SA JSON -> OAuth2 Bearer token minting (RS256 JWT assertion flow)
- Moved auth logic (parseSaJson, refreshVertexToken, token cache) from
  executor into open-sse/services/tokenRefresh.js to match project pattern
- Fixed executor to use proxyAwareFetch instead of raw fetch (proxy support)
- Simplified buildUrl: use global aiplatform.googleapis.com endpoint for
  both vertex (Gemini) and vertex-partner; removed region/modelFamily fields
- Added auto-detection of GCP project_id from raw API key via probe request
  (vertex-partner only, cached per key)
- Added vertex/vertex-partner cases to /api/providers/validate/route.js
- Updated model lists based on live testing:
  - vertex: gemini-3.1-pro-preview, gemini-3.1-flash-lite-preview,
    gemini-3-flash-preview, gemini-2.5-flash (removed gemini-2.5-pro: 404)
  - vertex-partner: deepseek-v3.2, qwen3-next-80b (instruct+thinking),
    glm-5 (removed Mistral/Llama: not enabled in test project)
  - gemini provider: added gemini-3.1-pro-preview, gemini-3.1-flash-lite-preview
- Removed bun.lock (project uses npm/package-lock.json)
- Removed region and modelFamily UI fields (global endpoint, auto-detect)
- Kiro token auto-refresh on AccessDeniedException (from commit 2)

Made-with: Cursor
2026-03-14 11:37:23 +07:00
decolua
b0c6b61398 Refactor config 2026-03-12 16:20:46 +07:00
decolua
83d94daa82 feat(ollama): Enhance Ollama support by adding new models, updating API format handling, and integrating translation functionality. 2026-03-12 15:24:10 +07:00
decolua
32e3980a13 feat(ollama): Add Ollama provider support with models and configuration, including API endpoints and UI updates. 2026-03-12 15:24:02 +07:00
decolua
fe49b61dfb feat: per-provider strategy overrides, new models, and UI/logging improvements
- Introduce per-provider strategy overrides in settings for more flexible connection management.
- Add new provider models: DeepSeek 3.1, DeepSeek 3.2, and Qwen3 Coder Next.
- Update the provider detail page UI to support the round-robin strategy with sticky limits.
- Log connection names instead of IDs for better clarity.
2026-03-11 18:04:38 +07:00
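A "round-robin strategy with sticky limits" could behave roughly as below (an illustrative sketch under assumed semantics: each connection serves up to `stickyLimit` consecutive requests before rotating; the names are not the project's):

```javascript
// Hedged sketch: a connection picker that sticks to one connection for a
// bounded number of requests, then rotates round-robin to the next one.
function makeRoundRobin(connections, stickyLimit = 3) {
  let index = 0;
  let uses = 0;
  return function next() {
    if (uses >= stickyLimit) {
      index = (index + 1) % connections.length; // rotate to next connection
      uses = 0;
    }
    uses += 1;
    return connections[index];
  };
}
```

Stickiness keeps related requests on one upstream connection (useful for caches and rate windows) while the limit still spreads load across all connections over time.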
Peter Steinberger
31775393e6 feat(iflow): sync model list with CLIProxyAPI
Made-with: Cursor
2026-03-10 16:38:32 +07:00
apeltekci
30e4689fb9 fix(cline): refresh static model catalog
Made-with: Cursor
2026-03-09 16:21:53 +07:00
decolua
758224749d feat: add support for the new "alicode-intl" provider 2026-03-07 10:08:55 +07:00
decolua
573b0f0241 refactor: restructure CLI tools and MITM server functionality
- Add a buildQwenBaseUrl function to construct URLs for Qwen resources.
- Update buildProviderUrl to support Qwen model requests.
- Extend token-refresh logic to include provider-specific data for Qwen.
- Refactor the CLI Tools page to exclude MITM tools and streamline model retrieval.
- Introduce new components for MITM server management.
- Update API routes to handle Qwen-specific resource URLs and improve error handling.
2026-03-05 11:25:03 +07:00
decolua
5954b8f4eb refactor: streamline chatCore.js and fix /v1/responses streaming
- Refactor chatCore.js to streamline imports and remove unused functions.
- Fix streaming for /v1/responses.
2026-02-27 11:15:12 +07:00
decolua
1f4423d444 Merge remote-tracking branch 'origin/master' 2026-02-27 09:35:39 +07:00
decolua
0e285a9ed3 Merge branch 'pr-203' 2026-02-27 09:33:14 +07:00
司徒玟琅
4527e5e126 feat: update provider models (Cursor IDE) with new versions (#204)
- Added new provider models: Claude 4.6 Opus Max, Claude 4.6 Sonnet Medium Thinking, Kimi K2.5, Gemini 3.1 Pro Preview, Gemini 3 Flash Preview, GPT 5.2, and GPT 5.3 Codex.
2026-02-27 09:09:20 +07:00
BiuBiu_Hu
d14c18f77f refactor: rename provider to alicode (Aliyun Coding)
Rename alicloud to alicode to clearly indicate Aliyun's Coding Plan service.

- Provider ID: alicode (short for Aliyun Coding)
- Model format: alicode/qwen3.5-plus
- Simplified mapping - no more bidirectional aliases

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-26 08:05:05 +08:00
BiuBiu_Hu
b0ec81f4a5 feat: add Alibaba Cloud Coding Plan support
Add support for Alibaba Cloud Bailian Coding Plan, a coding-focused AI service
that provides fixed monthly pricing for multiple models.

Changes:
- Add alicloud provider with OpenAI-compatible API endpoint
- Support 8 models: qwen3.5-plus, kimi-k2.5, glm-5, MiniMax-M2.5,
  qwen3-max, qwen3-coder-next, qwen3-coder-plus, glm-4.7
- Use "ali" as provider alias (ali/model format)
- Add API key validation and connection testing
- Add frontend provider definition with "ALi" text icon

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-26 07:58:40 +08:00
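The "ali/model" alias format above could resolve along these lines (a hedged sketch; `ALIASES` and `parseModelRef` are illustrative names, not the project's actual code):

```javascript
// Hedged sketch: map a short provider alias prefix ("ali") to the full
// provider id ("alicloud") when parsing a "prefix/model" reference.
const ALIASES = { ali: "alicloud" };

function parseModelRef(ref) {
  const [prefix, ...rest] = ref.split("/");
  const provider = ALIASES[prefix] || prefix; // fall back to the literal id
  return { provider, model: rest.join("/") };
}
```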
Quan
07717bad60 feat: cherry-pick PR #183 — multi-provider support, PWA, dynamic models, UI improvements
Cherry-picked from decolua/9router PR #183.
Note: open-sse changes included but need further review due to extensive modifications.

Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-25 11:40:50 +07:00
decolua
a5eb5a864e chore: add Gemini 3.1 Pro models to provider configurations 2026-02-22 15:20:24 +07:00
decolua
930e917092 chore: update version and enhance provider model configurations. 2026-02-22 11:30:43 +07:00
decolua
f2025cc776 feat: add Gemini 3.1 Pro models to provider 2026-02-20 21:05:02 +07:00
decolua
985985e454 refactor: update Antigravity model configurations and pricing 2026-02-20 17:52:15 +07:00
decolua
3debf84b9a Add Providers 2026-02-20 17:05:46 +07:00
Hồ Xuân Dũng
a57a8ce206 feat: add Gemini embeddings support + Letta compatibility fixes
Cherry-picked from decolua/9router#148 (author: xuandung38 / Hồ Xuân Dũng <me@hxd.vn>)

- Add Google AI (Gemini) embeddings support for /v1/embeddings endpoint
- Add Gemini embedding models: gemini-embedding-001, text-embedding-005, text-embedding-004
- Inject missing object/created fields for Letta and strict OpenAI clients
- Strip Azure-specific fields (prompt_filter_results, content_filter_results) from responses
- Fix Dockerfile: copy open-sse directory into Docker runner stage

Skipped: whitelist message field stripping (commit 3/7/8) — too aggressive for all providers
Skipped: default stream=false change (commit 9) — behavior change needs further review
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-20 15:01:10 +07:00
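The Letta-compatibility fixes above (inject missing `object`/`created`, strip Azure-specific filter fields) could be sketched as follows; `normalizeResponse` is an illustrative name and the exact field handling is an assumption from the bullet text:

```javascript
// Hedged sketch: make a proxied chat-completion response palatable to
// strict OpenAI clients (e.g. Letta) by filling required fields and
// dropping Azure-only extras.
function normalizeResponse(res) {
  const out = { ...res };
  if (!out.object) out.object = "chat.completion";
  if (!out.created) out.created = Math.floor(Date.now() / 1000);
  delete out.prompt_filter_results; // Azure-specific, top level
  if (Array.isArray(out.choices)) {
    // content_filter_results is Azure-specific, per choice
    out.choices = out.choices.map(({ content_filter_results, ...c }) => c);
  }
  return out;
}
```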
decolua
e1e5a81613 feat: add GLM 5 and MiniMax M2.5 models to providerModels.js; add Claude Sonnet 4.6 to CLI tools 2026-02-20 14:44:53 +07:00
zx07
f933dd9c61 feat: add Qwen3.5 Coder Model configuration (#156)
Co-authored-by: zx <me@char.moe>
2026-02-19 21:55:11 +07:00
EdamAmex
c4aa4247bd feat: Add GPT 5.3 Codex to GitHub Copilot (#150)
* Add GPT-5.3 Codex model to providerModels.js

* Add pricing constants for gpt-5.3-codex
2026-02-19 12:10:34 +07:00
すずねーう
4e2a3f888c feat: Add Claude Sonnet 4.6 to GitHub Copilot (#149)
* Add Claude Sonnet 4.6 to GitHub Copilot

Claude Sonnet 4.6 is available in GitHub Copilot now.
https://github.blog/changelog/2026-02-17-claude-sonnet-4-6-is-now-generally-available-in-github-copilot/

* Add pricing constants for Claude Sonnet 4.6 for GitHub Copilot
2026-02-19 07:59:44 +07:00
decolua
b057c43c27 feat(open-sse): add Claude Sonnet 4.6 2026-02-18 13:31:32 +07:00
zx07
c7d44101b5 feat: add GPT 5.3 Codex Spark model to pricing and provider models (#133)
Co-authored-by: zx <me@char.moe>
2026-02-16 12:31:12 +07:00
decolua
3e4ca1889f feat: add "minimax-m2.5" model to providerModels 2026-02-15 13:03:32 +07:00
zx07
03ab554d1c feat: add support for GLM 5 (if) (#123)
(cherry picked from commit e26d65aa55726e330f6806aa1abfe05ac6801619)

Co-authored-by: zx <me@char.moe>
2026-02-13 19:37:13 +07:00
Blade
9c9af25acd fix: minimax-cn cannot be used in combo (#107)
* fix(provider): correct the minimax-cn-to-alias mapping

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
(cherry picked from commit 315c6fc91b06584b101a4078affef3bb3b0f7001)

* fix(provider): add minimax-cn model list to PROVIDER_MODELS

(cherry picked from commit 15bc2d070306f48da4887a6286ec2a6007300705)

---------

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-02-11 19:58:33 +07:00
Blade096
bd23ab41ee feat(iflow): add IFlowExecutor with HMAC-SHA256 signature and enable models
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-11 15:32:22 +07:00
decolua
e3dbd448af feat: add GPT-3.5 Turbo to GitHub Copilot provider 2026-02-11 15:29:21 +07:00
decolua
6ade8ef39a feat: add GPT-4 to GitHub Copilot provider 2026-02-11 15:27:26 +07:00
decolua
053e490bb5 feat: add GPT-4o mini to GitHub Copilot provider 2026-02-11 15:22:25 +07:00
Bexultan
c090bb01b2 feat: add GPT 4o to GitHub Copilot provider (#98) 2026-02-11 07:19:22 +07:00
Bexultan
553346b522 Fix incorrect model ID for Raptor Mini in GitHub provider (#96) 2026-02-11 07:18:13 +07:00
Bexultan
3d605977b3 feat: add Claude Opus 4.6 to GitHub Copilot provider (#97) 2026-02-11 07:17:21 +07:00
Bexultan
4ea9a9da1c Fix: incorrect Gemini 3 Flash ID for GitHub provider (#94) 2026-02-10 19:45:49 +07:00
Bexultan
d36bd63e28 Fix: incorrect Gemini 3 Pro ID for GitHub provider (#95) 2026-02-10 19:19:08 +07:00