Match native GeminiCLI client fingerprint to avoid upstream rejection.
Also fix base executor to call transformRequest before buildHeaders so
subclasses can store model context for header generation.
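A minimal sketch of the new call order (class names and header values are hypothetical; the real fingerprint lives in the executor):

```js
// Sketch: transformRequest now runs before buildHeaders, so a subclass can
// stash model context that its buildHeaders override later reads.
class BaseExecutor {
  async execute(request) {
    const body = await this.transformRequest(request); // runs first
    const headers = this.buildHeaders();               // may read stashed state
    return { headers, body };
  }
  async transformRequest(request) { return request; }
  buildHeaders() { return { "Content-Type": "application/json" }; }
}

class GeminiCLIExecutor extends BaseExecutor {
  async transformRequest(request) {
    this.model = request.model; // stored for header generation
    return request;
  }
  buildHeaders() {
    // Illustrative value; the actual headers mirror the native client fingerprint.
    return { ...super.buildHeaders(), "x-goog-api-client": `gemini-cli model/${this.model}` };
  }
}
```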
Made-with: Cursor
- Centralize proxy management with reusable proxy pools
- Per-connection proxy binding with legacy fallback
- Add strictProxy option: fail hard instead of silently falling back to a direct connection (see the sketch below)
- Resolve alicode-intl conflict: keep alicode-intl support + proxy support
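A minimal sketch of the strictProxy semantics (the pool API shown is an assumption):

```js
// Sketch: with strictProxy enabled, proxy exhaustion is an error rather than
// a silent downgrade to a direct connection.
function resolveProxy(pool, { strictProxy = false } = {}) {
  const proxy = pool.acquire(); // assumed: returns a proxy URL, or null when exhausted
  if (proxy) return proxy;
  if (strictProxy) {
    throw new Error("strictProxy: no proxy available, refusing direct fallback");
  }
  return null; // legacy behaviour: caller connects directly
}
```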
Made-with: Cursor
- Introduce BATCH_SIZE configuration for parallel language translation
- Update translation logic to process languages in batches, with a delay between batches to avoid rate limits (see the sketch below)
- Enhance logging to display current batch being processed
Also add a new TimeAgo component for an auto-updating time display in UsageStats.
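A minimal sketch of the batching loop (the BATCH_SIZE value, delay, and translateLanguage are illustrative assumptions):

```js
const BATCH_SIZE = 3;          // assumed value for illustration
const BATCH_DELAY_MS = 2000;   // back-off between batches to avoid rate limits
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function translateAll(languages, translateLanguage) {
  for (let i = 0; i < languages.length; i += BATCH_SIZE) {
    const batch = languages.slice(i, i + BATCH_SIZE);
    console.log(`Translating batch ${i / BATCH_SIZE + 1}: ${batch.join(", ")}`);
    await Promise.all(batch.map(translateLanguage)); // languages within a batch run in parallel
    if (i + BATCH_SIZE < languages.length) await sleep(BATCH_DELAY_MS);
  }
}
```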
- Add GitHub Actions workflow to auto-translate README.md
- Support Vietnamese and Simplified Chinese
- Use the GLM-5 API in streaming mode (sketched below)
- Auto-commit translations to i18n/ folder
- Trigger on README.md changes or manual dispatch
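A minimal sketch of the streaming call, treating the API as OpenAI-compatible SSE (the endpoint URL, model id, and env var name are assumptions; for brevity it assumes each chunk carries whole "data:" lines, while a real version buffers partial lines):

```js
async function streamTranslation(prompt) {
  const res = await fetch("https://open.bigmodel.cn/api/paas/v4/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.GLM_API_KEY}`, // assumed secret name
    },
    body: JSON.stringify({
      model: "glm-5",
      stream: true,
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const decoder = new TextDecoder();
  let text = "";
  for await (const chunk of res.body) {
    for (const line of decoder.decode(chunk, { stream: true }).split("\n")) {
      if (!line.startsWith("data: ") || line === "data: [DONE]") continue;
      text += JSON.parse(line.slice(6)).choices?.[0]?.delta?.content ?? "";
    }
  }
  return text;
}
```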
Made-with: Cursor
- Implement runtime i18n using a MutationObserver for automatic DOM translation (sketched below)
- Add language switcher dropdown in dashboard header (EN/VI/ZH)
- Support 3 languages: English (default), Vietnamese (Tiếng Việt), Simplified Chinese (简体中文)
- Add translation files: vi.json (197 entries), zh-CN.json (513 entries, cleaned)
- Translate dashboard UI: sidebar menu, header, settings, MITM page
- Use cookie-based locale persistence with /api/locale endpoint
- Zero component changes required; translations are applied at runtime
- Fix Header flicker on route change with key={pathname}
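A minimal sketch of the runtime-translation core (the dictionary shape and function name are hypothetical; the real code also handles attributes and locale switching):

```js
function startRuntimeI18n(dict) {
  const translate = (node) => {
    if (node.nodeType === Node.TEXT_NODE) {
      const replacement = dict[node.textContent.trim()];
      if (replacement) node.textContent = replacement;
      return;
    }
    node.childNodes.forEach(translate);
  };
  translate(document.body); // initial pass over the already-rendered DOM
  new MutationObserver((mutations) => {
    for (const m of mutations) m.addedNodes.forEach(translate); // translate nodes as they mount
  }).observe(document.body, { childList: true, subtree: true });
}
```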
Co-authored-by: eachann <each1024@qq.com>
Based on PR #247 from decolua/9router, using a runtime approach
Made-with: Cursor
- Add buildQwenBaseUrl function to construct URLs for Qwen resources (sketched below).
- Update buildProviderUrl to support Qwen model requests.
- Enhance token refresh logic to include provider-specific data for Qwen.
- Refactor CLI Tools page to exclude MITM tools and streamline model retrieval.
- Introduce new components for MITM server management.
- Update API routes to handle Qwen-specific resource URLs and improve error handling.
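A minimal sketch of the Qwen URL construction (host names and the function signatures are assumptions):

```js
function buildQwenBaseUrl(resource, intl = false) {
  // Assumed DashScope-style hosts; the actual hosts are provider-specific.
  const host = intl
    ? "https://dashscope-intl.aliyuncs.com"
    : "https://dashscope.aliyuncs.com";
  return `${host}/${resource.replace(/^\/+/, "")}`;
}

function buildProviderUrl(provider, resource) {
  if (provider === "qwen") return buildQwenBaseUrl(resource);
  // ...existing providers unchanged
  throw new Error(`unknown provider: ${provider}`);
}
```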
Cursor sends images in Chat Completions format:
{ type: "image_url", image_url: { url: "data:...", detail: "auto" } }
But the Codex Responses API requires:
{ type: "input_image", image_url: "data:..." }
- openai-responses.js: bidirectional conversion image_url <-> input_image (see the sketch below)
- responsesApiHelper.js: input_image -> image_url in Responses->Chat path
- codex.js: safety net conversion in executor before sending to Codex API
Note: Cursor has a known bug where images bypass the Override OpenAI Base URL
setting and are sent directly to api.openai.com. This fix is therefore effective
for other clients (curl, Codex CLI, Claude Code) that route through the proxy correctly.
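A minimal sketch of the two conversion directions (helper names are hypothetical; the real logic lives in openai-responses.js and responsesApiHelper.js):

```js
// Chat Completions part -> Responses API part
function toInputImage(part) {
  if (part?.type !== "image_url") return part;
  const url = typeof part.image_url === "string" ? part.image_url : part.image_url?.url;
  return { type: "input_image", image_url: url }; // Responses API takes a bare string
}

// Responses API part -> Chat Completions part
function toImageUrl(part) {
  if (part?.type !== "input_image") return part;
  return { type: "image_url", image_url: { url: part.image_url, detail: "auto" } };
}
```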
Made-with: Cursor
Dashboard routes were accessible without authentication because
dashboardGuard.js was not connected to the Next.js middleware pipeline.
Created src/proxy.js to export the auth guard using the Next.js 16
proxy convention (sketched below). Now /dashboard routes properly redirect to
/login when the auth_token cookie is missing or invalid.
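A minimal sketch of src/proxy.js under that convention (the file's export shape and the guard helper import are assumptions):

```js
import { NextResponse } from "next/server";
import { isValidAuthToken } from "./lib/dashboardGuard"; // hypothetical helper

export default function proxy(request) {
  if (!request.nextUrl.pathname.startsWith("/dashboard")) return NextResponse.next();
  const token = request.cookies.get("auth_token")?.value;
  if (!token || !isValidAuthToken(token)) {
    return NextResponse.redirect(new URL("/login", request.url)); // missing or invalid token
  }
  return NextResponse.next();
}

export const config = { matcher: ["/dashboard/:path*"] };
```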
* fix(translator): filter nameless hosted tools when converting Responses API to Chat format
Codex CLI sends "hosted" tools (e.g. `request_user_input`) via the OpenAI
Responses API. These tools have no explicit `name` field. The previous
`body.tools.map()` pass propagated `name: undefined` into the resulting
Chat Completions function declarations, which then became anonymous
`functionDeclarations` after the OpenAI→Gemini translation step.
Gemini strictly requires every function declaration to have a valid name
and rejects the entire request with:
GenerateContentRequest.tools[0].function_declarations[4].name:
Invalid function name. Must start with a letter or an underscore.
Fix: filter out any Responses API tool that lacks a non-empty `name`
string before converting to `{ type: "function", function: { name, ... } }`.
Named function tools are unaffected; only unnamed hosted tools are skipped.
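A minimal sketch of the filtered conversion (the helper name is hypothetical; field names follow the Responses API tool shape):

```js
function toChatTools(tools = []) {
  return tools
    .filter((t) => typeof t.name === "string" && t.name.length > 0) // skip nameless hosted tools
    .map((t) => ({
      type: "function",
      function: { name: t.name, description: t.description, parameters: t.parameters },
    }));
}
```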
Fixes: Gemini 400 error when Codex CLI is routed through 9router.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* feat(gemini): convert OpenAI SSE to Gemini SSE format in /v1beta/models route
The @google/genai SDK always uses :streamGenerateContent?alt=sse for chat
and expects Gemini SSE chunk format. The upstream handleChat returns OpenAI
SSE format, causing the SDK to crash on the [DONE] sentinel.
Changes:
- Add transformOpenAISSEToGeminiSSE() using a TransformStream that converts
  each OpenAI SSE chunk (choices[0].delta) to the Gemini SSE format
  (candidates[0].content.parts) on the fly (sketched below)
- Drop the OpenAI [DONE] sentinel (Gemini SSE ends by stream close)
- Map finish_reason -> finishReason, attach usageMetadata on final chunk
- Support reasoning_content -> thought: true parts for thinking models
- Refactor finishReasonMap to shared FINISH_REASON_MAP constant
- Fix streaming dispatch: stream=true now calls transformOpenAISSEToGeminiSSE
instead of passing OpenAI SSE through raw
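A minimal sketch of the transform, simplified to text-only deltas and whole "data:" lines per chunk (the real version buffers partial lines and also maps finish_reason, usageMetadata, and reasoning_content):

```js
function transformOpenAISSEToGeminiSSE() {
  const decoder = new TextDecoder();
  const encoder = new TextEncoder();
  return new TransformStream({
    transform(chunk, controller) {
      for (const line of decoder.decode(chunk, { stream: true }).split("\n")) {
        if (!line.startsWith("data: ")) continue;
        const payload = line.slice(6);
        if (payload === "[DONE]") continue; // dropped: Gemini SSE ends by stream close
        const delta = JSON.parse(payload).choices?.[0]?.delta ?? {};
        const geminiChunk = {
          candidates: [{ content: { role: "model", parts: [{ text: delta.content ?? "" }] } }],
        };
        controller.enqueue(encoder.encode(`data: ${JSON.stringify(geminiChunk)}\n\n`));
      }
    },
  });
}
```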
Fixes: SyntaxError: "[DONE]" is not valid JSON in Gemini CLI
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
---------
Co-authored-by: Claude <noreply@anthropic.com>