289 Commits

Author SHA1 Message Date
decolua
758224749d feat: add support for the new "alicode-intl" provider 2026-03-07 10:08:55 +07:00
decolua
d347de8092 feat: enhance translator functionality and UI 2026-03-06 16:26:33 +07:00
decolua
75f486b7a2 Added profile ARN handling in OAuth provider mapping and improved polling logic in OAuth modal for better user experience. 2026-03-06 00:21:27 +07:00
decolua
573b0f0241 - Refine the overall structure of the CLI tools and MITM server functionality.
- Add buildQwenBaseUrl function to construct URLs for Qwen resources.
- Update buildProviderUrl to support Qwen model requests.
- Enhance token refresh logic to include provider-specific data for Qwen.
- Refactor CLI Tools page to exclude MITM tools and streamline model retrieval.
- Introduce new components for MITM server management.
- Update API routes to handle Qwen-specific resource URLs and improve error handling.
2026-03-05 11:25:03 +07:00
Rodrigo Rodrigues Costa
40a53fbd33 Fix: Codex image support - convert image_url to input_image format (#236)
Cursor sends images as Chat Completions format:
  { type: "image_url", image_url: { url: "data:...", detail: "auto" } }

But Codex Responses API requires:
  { type: "input_image", image_url: "data:..." }

- openai-responses.js: bidirectional conversion image_url <-> input_image
- responsesApiHelper.js: input_image -> image_url in Responses->Chat path
- codex.js: safety net conversion in executor before sending to Codex API

Note: Cursor has a known bug where images bypass the Override OpenAI Base URL
and are sent directly to api.openai.com. This fix is effective for other clients
(curl, Codex CLI, Claude Code) that route through the proxy correctly.

Made-with: Cursor
2026-03-05 10:31:50 +07:00
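The bidirectional conversion described in this commit can be sketched as follows. This is a minimal illustration based only on the two payload shapes quoted in the message; the function names are hypothetical, not the project's actual helpers in openai-responses.js.

```javascript
// Chat Completions -> Responses API:
// { type: "image_url", image_url: { url, detail } } -> { type: "input_image", image_url: url }
function chatImageToResponsesImage(block) {
  if (block.type === "image_url") {
    // Tolerate both the object form and a bare string URL.
    const url =
      typeof block.image_url === "string" ? block.image_url : block.image_url?.url;
    return { type: "input_image", image_url: url };
  }
  return block; // non-image blocks pass through unchanged
}

// Responses API -> Chat Completions (the reverse path in responsesApiHelper.js).
function responsesImageToChatImage(block) {
  if (block.type === "input_image") {
    return { type: "image_url", image_url: { url: block.image_url, detail: "auto" } };
  }
  return block;
}
```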
decolua
11b2fcd643 Fix Antigravity OAuth 2026-03-03 16:01:10 +07:00
decolua
07d4cdfa7e fix: Claude OAuth 2026-03-03 14:46:05 +07:00
decolua
4903a9b2cb feat: console log 2026-03-02 09:31:16 +07:00
decolua
50990e84b4 Fix AG MITM 2026-03-01 18:40:55 +07:00
Владимир Акимов
7076108550 fix(translator): filter nameless hosted tools when converting Responses API to Chat format (#222)
Codex CLI sends "hosted" tools (e.g. `request_user_input`) via the OpenAI
Responses API. These tools have no explicit `name` field. The previous
`body.tools.map()` pass propagated `name: undefined` into the resulting
Chat Completions function declarations, which then became anonymous
`functionDeclarations` after the OpenAI→Gemini translation step.

Gemini strictly requires every function declaration to have a valid name
and rejects the entire request with:

  GenerateContentRequest.tools[0].function_declarations[4].name:
  Invalid function name. Must start with a letter or an underscore.

Fix: filter out any Responses API tool that lacks a non-empty `name`
string before converting to `{ type: "function", function: { name, ... } }`.
Named function tools are unaffected; only unnamed hosted tools are skipped.

Fixes: Gemini 400 error when Codex CLI is routed through 9router.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>
2026-03-01 08:48:43 +00:00
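The filter this commit describes can be sketched as below. The tool shapes are assumed from the message (Responses API tools carry a top-level `name`; Chat Completions tools nest it under `function`); this is not the repository's actual code.

```javascript
// Drop any Responses API tool lacking a non-empty name before building
// Chat Completions function declarations, so no anonymous
// functionDeclarations reach the OpenAI -> Gemini translation step.
function toChatTools(tools = []) {
  return tools
    .filter((t) => typeof t.name === "string" && t.name.length > 0)
    .map((t) => ({
      type: "function",
      function: {
        name: t.name,
        description: t.description,
        parameters: t.parameters,
      },
    }));
}
```

Named function tools map through unchanged; hosted tools like `request_user_input`, which arrive without a `name`, are simply skipped.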
Cengizhan
f763d4ffed fix: resolve GitHub Copilot 400 error for Claude models in Cursor IDE (#220)
- github.js: add sanitizeMessages() to convert unsupported content types
  (tool_use, tool_result, thinking) to plain text before sending to
  GitHub /chat/completions endpoint
- openai-responses.js: skip pushing message blocks with empty content
  (e.g. assistant messages that only contain tool_calls)
- providerModels.js: revert targetFormat changes (not needed with sanitize fix)

Fixes: https://github.com/decolua/9router/issues/219
2026-03-01 15:38:14 +07:00
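A `sanitizeMessages()` pass of the kind this commit describes might look like the sketch below: collapse content blocks the GitHub `/chat/completions` endpoint rejects (`tool_use`, `tool_result`, `thinking`) into plain text. Block field names beyond the types named in the commit are assumptions, not the actual github.js implementation.

```javascript
// Flatten unsupported Anthropic-style content blocks into a plain string
// before sending to GitHub's /chat/completions endpoint.
function sanitizeMessages(messages) {
  return messages.map((msg) => {
    if (!Array.isArray(msg.content)) return msg; // already plain text
    const text = msg.content
      .map((block) => {
        if (block.type === "text") return block.text;
        if (block.type === "thinking") return block.thinking ?? "";
        if (block.type === "tool_use") return `[tool call: ${block.name}]`;
        if (block.type === "tool_result") return "[tool result]";
        return "";
      })
      .filter(Boolean)
      .join("\n");
    return { ...msg, content: text };
  });
}
```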
decolua
a7365c5a4e fix: Codex on Cursor 2026-03-01 15:35:41 +07:00
decolua
2f4b813c5b feat(usage): implement timeout and error handling for antigravity usage and subscription requests
- Add a 10-second timeout for fetch requests in getAntigravityUsage and getAntigravitySubscriptionInfo functions.
- Include error logging for fetch failures in both functions.
- Update headers to include "x-request-source" for MITM bypass.
- Enhance proxyFetch with DNS resolution and MITM bypass capabilities.
- Ensure proxyFetch is loaded in the API route for proper fetch patching.
2026-02-28 12:12:49 +07:00
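One way to implement the 10-second fetch timeout and error logging this commit describes is `AbortSignal.timeout` (available in Node >= 17.3). This is a generic sketch, not the actual code in the usage functions.

```javascript
// Abort the request if no response arrives within `ms` milliseconds,
// logging the failure before rethrowing so callers can handle it.
async function fetchWithTimeout(url, options = {}, ms = 10_000) {
  try {
    return await fetch(url, { ...options, signal: AbortSignal.timeout(ms) });
  } catch (err) {
    // A TimeoutError is thrown when the signal fires first.
    console.error(`usage fetch failed: ${err.name}: ${err.message}`);
    throw err;
  }
}
```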
decolua
04ba66bc1e chore: Refactor CursorAuthModal to handle manual instructions for Windows users. 2026-02-28 12:12:49 +07:00
gen
5a015e5b4d feat(proxy): add outbound HTTP proxy support for OAuth + provider requests
- Patch Node fetch via undici ProxyAgent when HTTP_PROXY/HTTPS_PROXY/ALL_PROXY is set
- Ensure proxy patch is loaded for both chat pipeline and OAuth token exchange
- Add Dashboard Settings → Network to edit outbound proxy and apply immediately
- Persist outbound proxy settings in local db and initialize on server startup
- Move proxy helpers to src/lib/network/ for better structure
- Rename src/proxy.js → src/dashboardGuard.js to avoid naming confusion
- Re-apply proxy env after DB import
- Fix: close old dispatcher on proxy URL change to prevent connection pool leak
- Fix: idempotency guard to avoid patching globalThis.fetch multiple times

Made-with: Cursor
2026-02-28 10:11:53 +07:00
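The idempotency guard mentioned in the last bullet can be sketched as below: wrap `globalThis.fetch` only once, marking the wrapper so repeated initialization (e.g. after a DB import re-applies proxy env) is a no-op. The symbol key and wrapper shape are illustrative, not the project's actual patch.

```javascript
// Marker symbol shared across module reloads via the global symbol registry.
const PATCHED = Symbol.for("9router.fetchPatched");

// Returns true if the patch was applied, false if fetch was already wrapped.
function patchFetchOnce(wrap) {
  if (globalThis.fetch?.[PATCHED]) return false; // idempotency guard
  const original = globalThis.fetch;
  const wrapped = (...args) => wrap(original, ...args);
  wrapped[PATCHED] = true;
  globalThis.fetch = wrapped;
  return true;
}
```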
decolua
833069caac Fix MITM on Windows 2026-02-28 10:04:57 +07:00
decolua
5954b8f4eb - Refactor chatCore.js to streamline imports and remove unused functions.
- Fix streaming /v1/responses
2026-02-27 11:15:12 +07:00
decolua
25c2ad7360 feat: implement model lock functionality for connection management 2026-02-27 10:29:11 +07:00
decolua
1f4423d444 Merge remote-tracking branch 'origin/master' 2026-02-27 09:35:39 +07:00
decolua
0e285a9ed3 Merge branch 'pr-203' 2026-02-27 09:33:14 +07:00
decolua
9ae5487bc7 Fix antigravity 2026-02-27 09:17:49 +07:00
司徒玟琅
4527e5e126 feat: update provider models(Cursor IDE) with new versions (#204)
- Added new provider models: Claude 4.6 Opus Max, Claude 4.6 Sonnet Medium Thinking, Kimi K2.5, Gemini 3.1 Pro Preview, Gemini 3 Flash Preview, GPT 5.2, and GPT 5.3 Codex.
2026-02-27 09:09:20 +07:00
BiuBiu_Hu
d14c18f77f refactor: rename provider to alicode (Aliyun Coding)
Rename alicloud to alicode to clearly indicate Aliyun's Coding Plan service.

- Provider ID: alicode (short for Aliyun Coding)
- Model format: alicode/qwen3.5-plus
- Simplified mapping - no more bidirectional aliases

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-26 08:05:05 +08:00
BiuBiu_Hu
b0ec81f4a5 feat: add Alibaba Cloud Coding Plan support
Add support for Alibaba Cloud Bailian Coding Plan, a coding-focused AI service
that provides fixed monthly pricing for multiple models.

Changes:
- Add alicloud provider with OpenAI-compatible API endpoint
- Support 8 models: qwen3.5-plus, kimi-k2.5, glm-5, MiniMax-M2.5,
  qwen3-max, qwen3-coder-next, qwen3-coder-plus, glm-4.7
- Use "ali" as provider alias (ali/model format)
- Add API key validation and connection testing
- Add frontend provider definition with "ALi" text icon

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-26 07:58:40 +08:00
decolua
9003675b71 - Updated CLI tool components to accept initial status as a prop, improving state management for tool statuses.
- Added functionality to fetch and set statuses for various CLI tools (Claude, Codex, Droid, OpenClaw, Antigravity) on component mount.
- Enhanced error handling and logging in the OAuth provider test utilities and DNS management functions.
- Improved the MITM server to handle multiple target hosts and provide clearer error messages regarding port usage.
2026-02-25 16:32:05 +07:00
Quan
07717bad60 feat: cherry-pick PR #183 — multi-provider support, PWA, dynamic models, UI improvements
Cherry-picked from decolua/9router PR #183.
Note: open-sse changes included but need further review due to extensive modifications.

Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-25 11:40:50 +07:00
decolua
8221f7c027 Fix MITM 2026-02-23 21:56:40 +07:00
decolua
d21f7aaadc Fix Tunnel bug 2026-02-22 21:44:11 +07:00
decolua
a5eb5a864e chore: add Gemini 3.1 Pro models to provider configurations 2026-02-22 15:20:24 +07:00
decolua
930e917092 chore: update version and enhance provider model configurations. 2026-02-22 11:30:43 +07:00
zx07
ea67742f2a feat: implement real project ID fetching for Antigravity (#170)
* feat: implement Project ID service to fetch and cache real Project IDs from Google Cloud Code API

* fix: implement caching and cleanup for Project ID retrieval

* feat: add project ID invalidation and refresh logic after token updates

* refactor: remove unnecessary format changes

* feat: add on-demand project ID retrieval for antigravity requests
2026-02-21 23:15:18 +07:00
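The caching and invalidation flow described in this PR can be sketched as a small per-account cache: fetch the real project ID once, reuse it on later requests, and invalidate after a token refresh so the next request re-fetches. The names and API here are hypothetical, not the actual service code.

```javascript
// fetchProjectId(accountId) is assumed to call the Google Cloud Code API
// and resolve to a project ID string.
function createProjectIdCache(fetchProjectId) {
  const cache = new Map();
  return {
    async get(accountId) {
      if (!cache.has(accountId)) {
        // Cache the promise itself so concurrent callers share one fetch.
        cache.set(accountId, fetchProjectId(accountId));
      }
      return cache.get(accountId);
    },
    // Called after a token update so the next get() fetches fresh data.
    invalidate(accountId) {
      cache.delete(accountId);
    },
  };
}
```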
decolua
94c4320632 Fix 2026-02-21 23:06:55 +07:00
decolua
0baa299722 feat:
- Added tunnel
- Removed cloud feature
2026-02-21 16:42:46 +07:00
decolua
adf57aa0c9 Fixed Codex 2026-02-21 14:36:06 +07:00
decolua
f2025cc776 feat: add Gemini 3.1 Pro models to provider 2026-02-20 21:05:02 +07:00
decolua
985985e454 refactor: update Antigravity model configurations and pricing 2026-02-20 17:52:15 +07:00
decolua
3debf84b9a Add Providers 2026-02-20 17:05:46 +07:00
Thiên Toán
806bd4ae14 feat: add API endpoint dimension to usage statistics dashboard (#152)
- Tracks endpoints like /v1/chat/completions, /v1/messages, /v1/responses
- New sortable/groupable table in usage dashboard with expandable groups
- Enhanced usage database aggregation by endpoint + model + provider
- Added endpoint tracking to all saveRequestUsage/saveRequestDetail calls
- Maintains backward compatibility with existing data structure
2026-02-20 15:03:18 +07:00
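The "aggregation by endpoint + model + provider" this commit adds can be sketched as keyed grouping over usage rows. Row field names are assumptions based on the endpoints listed above, not the actual usage-database schema.

```javascript
// Group usage rows by (endpoint, provider, model), summing request counts
// and token totals per group.
function aggregateUsage(rows) {
  const byKey = new Map();
  for (const row of rows) {
    const key = `${row.endpoint}|${row.provider}|${row.model}`;
    const agg =
      byKey.get(key) ??
      { endpoint: row.endpoint, provider: row.provider, model: row.model, requests: 0, tokens: 0 };
    agg.requests += 1;
    agg.tokens += row.tokens ?? 0;
    byKey.set(key, agg);
  }
  return [...byKey.values()];
}
```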
Hồ Xuân Dũng
a57a8ce206 feat: add Gemini embeddings support + Letta compatibility fixes
Cherry-picked from decolua/9router#148 (author: xuandung38 / Hồ Xuân Dũng <me@hxd.vn>)

- Add Google AI (Gemini) embeddings support for /v1/embeddings endpoint
- Add Gemini embedding models: gemini-embedding-001, text-embedding-005, text-embedding-004
- Inject missing object/created fields for Letta and strict OpenAI clients
- Strip Azure-specific fields (prompt_filter_results, content_filter_results) from responses
- Fix Dockerfile: copy open-sse directory into Docker runner stage

Skipped: whitelist message field stripping (commit 3/7/8) — too aggressive for all providers
Skipped: default stream=false change (commit 9) — behavior change needs further review
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-20 15:01:10 +07:00
decolua
e1e5a81613 feat: add GLM 5 and MiniMax M2.5 models to providerModels.js; add Claude Sonnet 4.6 to CLI tools 2026-02-20 14:44:53 +07:00
zx
a229d79158 feat(antigravity): initial steps for Antigravity anti-ban alignment
Cherry-picked from decolua/9router#141 (author: LinearSakana / zx <me@char.moe>)

- Implement client identity spoofing with numeric enums (ideType: 9, pluginType: 2)
- Add runtime platform detection for User-Agent and metadata
- Implement per-connection session ID caching (binary-compatible format)
- Add ANTIGRAVITY_HEADERS (X-Client-Name, X-Client-Version, x-goog-api-client)
- Add X-Machine-Session-Id header injection
- Align metadata/mode parameters across all Antigravity API calls
- Implement double injection for system prompt (raw + [ignore] wrapped)
- Rename internal anti-loop header to x-request-source for anonymity

Skipped: commit 6 (signature side-channel caching) — kept DEFAULT_THINKING_GEMINI_SIGNATURE
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-20 14:44:29 +07:00
Thiên Toán
9fbd6e619d fix: correct token extraction for Claude non-streaming responses (#131)
- Add response logging for non-streaming requests (5_res_provider.json, 7_res_client.json)
- Fix extractUsageFromResponse() to check Claude format before OpenAI format
- Prevents format misidentification that caused tokens to show as 0
- Claude uses input_tokens/output_tokens vs OpenAI's prompt_tokens/completion_tokens

Fixes dashboard Details tab showing 0 tokens for Claude requests
2026-02-20 14:24:21 +07:00
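The format check this fix describes can be sketched as below: look for Claude's `input_tokens`/`output_tokens` before falling back to OpenAI's `prompt_tokens`/`completion_tokens`, so a Claude usage object is not misread as an OpenAI one with zero tokens. The return shape is illustrative, not the actual `extractUsageFromResponse()` contract.

```javascript
// Check the Claude usage format first, then fall back to OpenAI's.
function extractUsage(usage = {}) {
  if (usage.input_tokens != null || usage.output_tokens != null) {
    // Claude format
    return { prompt: usage.input_tokens ?? 0, completion: usage.output_tokens ?? 0 };
  }
  // OpenAI format
  return { prompt: usage.prompt_tokens ?? 0, completion: usage.completion_tokens ?? 0 };
}
```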
zx07
f933dd9c61 feat: add Qwen3.5 Coder Model configuration (#156)
Co-authored-by: zx <me@char.moe>
2026-02-19 21:55:11 +07:00
EdamAmex
c4aa4247bd feat: Add GPT 5.3 Codex to GitHub Copilot (#150)
* Add GPT-5.3 Codex model to providerModels.js

* Add pricing constants for gpt-5.3-codex
2026-02-19 12:10:34 +07:00
すずねーう
4e2a3f888c feat: Add Claude Sonnet 4.6 to GitHub Copilot (#149)
* Add Claude Sonnet 4.6 to GitHub Copilot

Claude Sonnet 4.6 is available in GitHub Copilot now.
https://github.blog/changelog/2026-02-17-claude-sonnet-4-6-is-now-generally-available-in-github-copilot/

* Add pricing constants for Claude Sonnet 4.6 for GitHub Copilot
2026-02-19 07:59:44 +07:00
decolua
b057c43c27 feat(open-sse): add Claude Sonnet 4.6 2026-02-18 13:31:32 +07:00
misterdas
b9a697925e fix(open-sse): emit [DONE] in passthrough SSE mode (#142) 2026-02-18 13:26:23 +07:00
HXD.VN
e1b836168a feat: add /v1/embeddings endpoint (OpenAI-compatible) (#146)
* feat: implement /v1/embeddings endpoint (#117)

Add OpenAI-compatible POST /v1/embeddings endpoint that routes through
the existing provider credential + fallback infrastructure.

Changes:
- open-sse/handlers/embeddingsCore.js: core handler (handleEmbeddingsCore)
  * Validates input (string or array), encoding_format
  * Builds provider-specific URL and headers for openai, openrouter,
    and openai-compatible providers
  * Handles 401/403 token refresh via executor.refreshCredentials
  * Returns normalized OpenAI-format response { object: 'list', data, model, usage }
- cloud/src/handlers/embeddings.js: cloud Worker handler (handleEmbeddings)
  * Auth + machineId resolution identical to handleChat
  * Provider credential fallback loop with rate-limit tracking
- cloud/src/index.js: wire new routes
  * POST /v1/embeddings  (new format — machineId from API key)
  * POST /{machineId}/v1/embeddings  (old format — machineId from URL)

* test: add unit tests for /v1/embeddings endpoint

- Setup vitest as test framework (tests/ directory)
- embeddingsCore.test.js (36 tests):
  - buildEmbeddingsBody: single string, array, encoding_format, default float
  - buildEmbeddingsUrl: openai, openrouter, openai-compatible-*, unsupported
  - buildEmbeddingsHeaders: per-provider headers, accessToken fallback
  - handleEmbeddingsCore: input validation, success path, provider errors,
    network errors, invalid JSON, token refresh 401 handling
- embeddings.cloud.test.js (23 tests):
  - CORS OPTIONS preflight
  - Auth: missing/invalid/old-format/wrong key → 401/400
  - Body validation: bad JSON, missing model, missing input, bad model → 400
  - Happy path: single string, array, delegation, CORS header, machineId override
  - Rate limiting: all-rate-limited → 429 + Retry-After, no credentials → 400
  - Error propagation: non-fallback errors, 429 exhausts accounts

Total: 59/59 tests passing
Framework: vitest v4.0.18, Node v22.22.0

* feat: add Next.js API route for /v1/embeddings endpoint

Wire the embeddings handler into Next.js App Router.

- src/app/api/v1/embeddings/route.js: Next.js API route (POST + OPTIONS)
- src/sse/handlers/embeddings.js: SSE-layer handler mirroring chat.js pattern

Uses handleEmbeddingsCore from open-sse/handlers/embeddingsCore.js with
the same auth, credential fallback, and token refresh logic as the chat
handler. Supports REQUIRE_API_KEY env var, provider fallback loop, and
consistent logging.
2026-02-18 13:24:02 +07:00
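The request-normalization step named in the test list above (`buildEmbeddingsBody`: single string, array, `encoding_format`, default float) can be sketched as below. The exact behavior is inferred from that test list, not copied from embeddingsCore.js.

```javascript
// Validate the OpenAI-compatible embeddings input and default
// encoding_format to "float" when the client omits it.
function buildEmbeddingsBody({ model, input, encoding_format }) {
  if (typeof input !== "string" && !Array.isArray(input)) {
    throw new Error("input must be a string or an array of strings");
  }
  return { model, input, encoding_format: encoding_format ?? "float" };
}
```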
zx07
c7d44101b5 feat: add GPT 5.3 Codex Spark model to pricing and provider models (#133)
Co-authored-by: zx <me@char.moe>
2026-02-16 12:31:12 +07:00
decolua
3e4ca1889f - Add new model "minimax-m2.5" to providerModels. 2026-02-15 13:03:32 +07:00