Commit Graph

289 Commits

Author SHA1 Message Date
decolua
ad661c1286 feat: enhance CommandCode integration with improved message handling 2026-05-07 23:02:07 +07:00
decolua
b72a443bd3 feat: add CommandCode provider support 2026-05-07 23:01:33 +07:00
decolua
0d61a1d546 feat: add OllamaLocalExecutor and update provider handling
- Introduced OllamaLocalExecutor to handle requests for the "ollama-local" provider.
- Removed the direct URL construction for "ollama-local" from BaseExecutor.
- Updated index.js to include the new OllamaLocalExecutor in the executors mapping.
- Enhanced the ProvidersPage component to support dynamic addition of OpenAI/Anthropic compatible providers.
2026-05-07 16:42:36 +07:00
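The commit above moves "ollama-local" URL handling out of BaseExecutor into a dedicated executor looked up from a provider-to-executor map. A minimal sketch of that registry pattern (class and map shapes are illustrative, not the project's actual code):

```javascript
// Sketch of a provider -> executor registry (names are illustrative).
class BaseExecutor {
  constructor(provider) { this.provider = provider; }
  baseUrl() { throw new Error("baseUrl() must be implemented by a subclass"); }
}

class OllamaLocalExecutor extends BaseExecutor {
  // The local executor now owns its URL construction instead of BaseExecutor.
  baseUrl() { return "http://localhost:11434"; }
}

const executors = {
  "ollama-local": OllamaLocalExecutor,
};

function getExecutor(provider) {
  const Executor = executors[provider] || BaseExecutor;
  return new Executor(provider);
}
```

The point of the refactor is that adding a provider means adding one entry to the map, rather than growing conditionals inside BaseExecutor.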
Muhammad Mugni Hadi
7f93df3a92 feat: add audio input support for Gemini translation (#913)
Add input_audio and audio_url content type handlers to
convertOpenAIContentToParts() in geminiHelper.js, converting
OpenAI audio format to Gemini inlineData format.

Also add audio types to VALID_OPENAI_CONTENT_TYPES in
openaiHelper.js so they are not stripped by filterToOpenAIFormat().

Fixes #912
2026-05-07 15:51:30 +07:00
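The conversion described above maps OpenAI-style audio content parts onto Gemini `inlineData` parts. A hedged sketch of what one such handler might look like, using the publicly documented field names for both formats (the project's actual convertOpenAIContentToParts() covers more content types):

```javascript
// Hypothetical sketch: convert an OpenAI "input_audio" content part into
// a Gemini inlineData part. Field names follow the two public formats.
function audioPartToGemini(part) {
  if (part.type === "input_audio") {
    return {
      inlineData: {
        mimeType: `audio/${part.input_audio.format}`, // e.g. "wav", "mp3"
        data: part.input_audio.data,                  // base64 payload
      },
    };
  }
  return null; // other content types are handled by other converters
}
```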
decolua
5c62e73cc6 - Cowork: ComboFormModal
- BaseUrlSelect: add cloud endpoint option, custom URL local state, always
  default to first option; new cliEndpointMatch helper; CLI tool cards refactor
- API: new /v1/audio/voices and /v1/models/info; /v1/models filters disabled
  models, drop unused timestamp
- initializeApp: guard tunnel/tailscale auto-resume to once-per-process
- geminiHelper: ensureObjectType for schemas with properties but no type
- skills: minor SKILL.md tweaks (chat/embeddings/image/stt/tts/web-*)
2026-05-07 15:45:09 +07:00
decolua
d4bc42e1f5 feat: add STT support, Gemini TTS, and expand usage tracking
- Speech-to-Text: full pipeline with sttCore handler, /v1/audio/transcriptions
  endpoint, sttConfig for OpenAI, Gemini, Groq, Deepgram, AssemblyAI,
  HuggingFace, NVIDIA Parakeet; new 9router-stt skill
- Gemini TTS: add gemini provider with 30 prebuilt voices and TTS_PROVIDER_CONFIG
- Usage: implement GLM (intl/cn) and MiniMax (intl/cn) quota fetchers; refactor
  Gemini CLI usage to use retrieveUserQuota with per-model buckets
- Disabled models: lowdb-backed disabledModelsDb + /api/models/disabled route
- Header search: reusable Zustand store (headerSearchStore) wired into Header
- CLI tools: add Claude Cowork tool card and cowork-settings API
- Providers: introduce mediaPriority sorting in getProvidersByKind, add
  Kimi K2.6, reorder hermes, drop qwen STT kind
- UI: expand media-providers/[kind]/[id] page (+314), enhance OAuthModal,
  ModelSelectModal, ProviderTopology, ProxyPools, ProviderLimits
- Assets: refresh provider PNGs (alicode, byteplus, cloudflare-ai, nvidia,
  ollama, vertex, volcengine-ark) and add aws-polly, fal-ai, jina-ai, recraft,
  runwayml, stability-ai, topaz, black-forest-labs
2026-05-05 10:32:59 +07:00
decolua
9c6be62a54 feat: Skills 2026-05-04 11:29:02 +07:00
decolua
4ba546afe7 Enhance token refresh logic and improve MITM server handling
- Introduced a caching mechanism for in-flight token refresh requests to prevent race conditions and reduce unnecessary API calls.
- Added error handling for unrecoverable refresh errors, ensuring that the application can gracefully handle token reuse and invalidation scenarios.
- Updated the MITM server management to handle port 443 conflicts, allowing users to kill processes occupying the port before starting the server.
- Improved user feedback in the MitmServerCard component regarding port conflicts and admin privileges.
- Refactored the ComboList component to streamline the display of media provider combos.

This update aims to enhance the reliability and user experience of the token management and MITM functionalities.
2026-05-03 22:10:03 +07:00
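Caching in-flight refreshes, as the first bullet above describes, is a standard way to keep concurrent callers from racing to refresh the same token: everyone awaits the one request already in flight. A minimal sketch of the pattern (not the project's implementation):

```javascript
// Sketch: share a single in-flight refresh per account so concurrent
// callers don't each hit the token endpoint (illustrative names).
const inflightRefreshes = new Map();

async function refreshToken(accountId, doRefresh) {
  if (inflightRefreshes.has(accountId)) {
    return inflightRefreshes.get(accountId); // join the ongoing refresh
  }
  const p = doRefresh(accountId).finally(() => inflightRefreshes.delete(accountId));
  inflightRefreshes.set(accountId, p);
  return p;
}
```

The `finally` cleanup matters: without it a failed refresh would be cached forever and every later call would replay the same rejection.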
Anurag Saxena
8bdaeedb28 fix: strip stream_options for qwen non-streaming Claude Code requests (closes #557) (#663) 2026-05-03 15:21:43 +07:00
Anurag Saxena
67ca219fbf fix: update Qwen OAuth URLs from chat.qwen.ai to qwen.ai (closes #574) (#687) 2026-05-03 15:18:45 +07:00
Rezky Hamid
a463ee00ff feat(codex): add review model quota support (#836) 2026-05-03 14:57:33 +07:00
Zhen
14ff538f2e Improve mobile layouts and restore Cloudflare provider (#840)
Co-authored-by: Delynn Assistant <zhen@dkzhen.org>
2026-05-03 14:55:43 +07:00
decolua
f410061e70 Refactor proxyFetch and enhance MediaProviderDetailPage layout
- Removed the isCloud check from proxyFetch.js, simplifying the fetch patching logic.
- Updated MediaProviderDetailPage to include a new section for API key retrieval, improving user experience with clearer layout and additional notice text.
- Enhanced ConnectionRow to better handle email display names.
- Improved ProviderDetailPage to conditionally render provider notices and API key links.
- Refactored localDb, requestDetailsDb, and usageDb to remove unnecessary isCloud checks, streamlining database interactions.
- Updated OAuthModal to combine waiting and manual input steps for a more cohesive user flow.
- Added API key URLs to several providers in providers.js for better accessibility.
2026-05-01 17:03:13 +07:00
decolua
f8d2a9ff76 Merge branch 'master' of https://github.com/decolua/9router 2026-05-01 16:37:11 +07:00
Abhishek Divekar
3f17ee0e21 Add sticky round-robin for combos (#831)
Made-with: Cursor
2026-05-01 16:36:36 +07:00
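The commit title gives no detail, so the following is only one plausible reading of "sticky" round-robin: keep returning the current combo entry until the caller explicitly advances (e.g. after a failure), rather than rotating on every request. The semantics here are an assumption, not the PR's actual code:

```javascript
// Hypothetical sketch of sticky round-robin: repeated calls to current()
// return the same entry; advance() moves the pointer to the next one.
function createStickyRoundRobin(items) {
  let i = 0;
  return {
    current: () => items[i % items.length],
    advance: () => { i = (i + 1) % items.length; },
  };
}
```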
decolua
b0da7c1211 Add Xiaomi MiMo provider support
- Introduced Xiaomi MiMo as a new provider in providerModels.js and providers.js.
- Updated model alias mapping in model.js to include Xiaomi MiMo.
- Enhanced validation route to support Xiaomi MiMo API endpoints.
- Added Xiaomi MiMo to APIKEY_PROVIDERS with relevant details.

This update expands the range of supported providers, improving integration capabilities.
2026-05-01 16:32:25 +07:00
thuanhuynhh
9ca388972c Update providerModels.js (#818)
KIMI K2.5 will be deprecated on 05/05/2026. Update to the latest MiniMax version.
2026-05-01 16:20:40 +07:00
Rezky Hamid
30b114ab75 fix: strip output_config for MiniMax (#820) 2026-05-01 16:16:01 +07:00
decolua
936d65ae1c Enhance chat handling and introduce Caveman feature
- Refactored handleChatCore to include Caveman functionality, allowing for terse-style system prompts to reduce output token usage.
- Updated APIPageClient to manage Caveman settings, including enabling/disabling and selecting compression levels.
- Adjusted AntigravityExecutor to consolidate function declarations for compatibility with Gemini.
- Removed unnecessary console logs during translator initialization across multiple routes.
2026-04-30 18:00:38 +07:00
decolua
512e3de371 Update version to 0.4.9, enhance README with Trendshift badge, and add new embedding models to providerModels.js. Refactor TTS handling to support additional providers and improve API key validation for media providers. 2026-04-29 11:34:39 +07:00
decolua
e8aa5e2222 Fix : Add reasoning_content placeholder for DeepSeek thinking models 2026-04-29 09:34:24 +07:00
decolua
8f81363675 Enhance token refresh functionality across multiple executors
- Updated refreshCredentials methods in various executors (Antigravity, Base, Default, Github, Kiro) to accept optional proxyOptions for improved proxy handling.
- Modified token refresh logic to utilize proxy-aware fetch for better network management.
- Enhanced usage retrieval functions to support proxy options, ensuring seamless integration with proxy configurations.
- Updated ModelSelectModal and ProviderInfoCard components to incorporate kind filtering for improved user experience in model selection.
- Added validation for API keys in the provider validation route, including support for webSearch/webFetch providers.
2026-04-28 17:28:57 +07:00
decolua
1bb621317d Add Cloudflare AI provider support and enhance connection management
- Introduced Cloudflare AI as a new provider with specific configurations in providerModels.js and providers.js.
- Updated DefaultExecutor to handle account ID resolution for Cloudflare AI connections.
- Enhanced AddApiKeyModal and EditConnectionModal to include account ID input for Cloudflare AI.
- Implemented validation for Cloudflare AI API key connections in testUtils.js and route.js.
- Updated UI components to reflect changes in provider management and connection handling.
2026-04-28 11:07:39 +07:00
decolua
111e78940a Refactor cloudflared process management to improve port-specific termination and enhance tunnel management. Update Antigravity cloaking comments for clarity. 2026-04-28 10:20:31 +07:00
decolua
a3032f7a3e Merge branch 'pr-779-review' 2026-04-28 10:16:23 +07:00
Zhen
85959aac22 Fix quota reset timestamp parsing (#768)
Co-authored-by: Delynn Assistant <zhen@dkzhen.org>
2026-04-28 10:05:54 +07:00
Manuel B.
58a821d687 fix: granular reasoning_effort handling for Claude models (#791)
- github.js: split thinking vs reasoning_effort stripping
  - thinking (Claude-native format) still stripped for all Claude on Copilot
  - reasoning_effort now passed through for Opus 4.6 and Sonnet 4.6
  - still stripped for Haiku 4.5 and Opus 4.7 (rejected upstream)
  - reasoning_effort "none" stripped for all models (not all support it)
- openai-to-claude.js: map reasoning_effort → thinking.budget_tokens
  for direct Anthropic backend (none→skip, low→4096, medium→8192,
  high→16384, xhigh→32768)

Previously reasoning_effort was stripped for ALL Claude models,
meaning Opus 4.6 via Copilot never received thinking configuration.

AI-generated commit by Claude Opus 4.6 (Anthropic)
2026-04-28 09:49:27 +07:00
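The effort-to-budget mapping is spelled out in the commit message above (none→skip, low→4096, medium→8192, high→16384, xhigh→32768). A sketch of what that translation might look like; the function and constant names are illustrative, only the values come from the commit:

```javascript
// Map OpenAI-style reasoning_effort onto an Anthropic thinking config.
// Budget values are taken from the commit message above.
const EFFORT_BUDGETS = { low: 4096, medium: 8192, high: 16384, xhigh: 32768 };

function effortToThinking(reasoningEffort) {
  const budget = EFFORT_BUDGETS[reasoningEffort];
  if (budget === undefined) return null; // "none" or unknown: skip thinking config
  return { type: "enabled", budget_tokens: budget };
}
```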
lukmanfauzie
c43f8c54d4 fix: Antigravity INVALID_ARGUMENT errors and Copilot agent mode parity 2026-04-26 19:53:08 +08:00
lukmanfauzie
222e22fa53 Fix GitHub Copilot agent mode with Antigravity
Co-authored-by: Copilot <copilot@github.com>
2026-04-26 17:47:13 +08:00
decolua
83418e8a9d Add codex to image providers 2026-04-25 17:01:40 +07:00
decolua
14ff69bf90 Add BytePlus provider 2026-04-25 17:00:39 +07:00
decolua
0b8bed5793 Enhance image and embedding provider support
- Added new image models for GPT 5.2, 5.3, and 5.4, including capabilities for text-to-image and editing.
- Updated embedding handling to include optional dimensions in requests.
- Introduced support for custom embedding providers, allowing dynamic fetching and validation of custom nodes.
- Improved image generation handling with Codex integration, including progress tracking and error handling.
- Enhanced UI components to support adding custom embeddings and displaying their status.
2026-04-25 16:22:30 +07:00
decolua
cca615eaff - Cap maximum cooldown for rate limit handling in account unavailability and single-model chat flows
- Dynamic custom model fetching for model selection
2026-04-24 16:14:18 +07:00
decolua
030fb34f88 - Updated markAccountUnavailable function to accept resetsAtMs for precise cooldown management.
- Added email backfill functionality for Codex OAuth connections to improve account information accuracy.
2026-04-24 11:36:16 +07:00
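The two commits above both concern cooldown timing: capping the maximum cooldown, and passing a precise resetsAtMs into markAccountUnavailable. A sketch combining the two ideas; the 5-minute cap is an illustrative value, not taken from the code:

```javascript
// Derive a cooldown from the provider's reset timestamp, clamped to a
// maximum so one bad resetsAt can't park an account for hours.
const MAX_COOLDOWN_MS = 5 * 60 * 1000; // illustrative cap

function cooldownMs(resetsAtMs, nowMs = Date.now()) {
  const wait = Math.max(0, resetsAtMs - nowMs); // past resets mean no wait
  return Math.min(wait, MAX_COOLDOWN_MS);
}
```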
decolua
0aff9a502c fix: enhance retry logic and configuration for HTTP status codes 2026-04-24 10:07:08 +07:00
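Retry-on-status with backoff, as this commit's title suggests, typically looks like the sketch below; which status codes 9router actually retries, and its backoff shape, are configuration this example only assumes:

```javascript
// Generic sketch: retry on configured HTTP statuses with exponential backoff.
const RETRYABLE_STATUSES = new Set([429, 500, 502, 503, 504]); // assumed set

async function fetchWithRetry(doFetch, { retries = 3, baseDelayMs = 500 } = {}) {
  for (let attempt = 0; ; attempt++) {
    const res = await doFetch();
    if (!RETRYABLE_STATUSES.has(res.status) || attempt === retries) return res;
    // 500ms, 1s, 2s, ... between attempts
    await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
  }
}
```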
decolua
65f11a603e feat: add Azure OpenAI provider support 2026-04-24 10:04:59 +07:00
decolua
5abc9e5c74 add GPT 5.5 model 2026-04-24 09:51:05 +07:00
kenlin
f2e7a98ce0 feat: add built-in Volcengine Ark provider support (#741)
Add Volcengine Ark as a first-class API key provider with official model presets, endpoint configuration, API key validation, model discovery, connection testing, provider logo, and runtime alias mapping for `ark/*` model IDs.

Made-with: Cursor

Co-authored-by: kingsy <kingsylin@vip.qq.com>
2026-04-24 09:28:50 +07:00
decolua
368f4c3e7f - Added Hermes tool to CLI tools and updated related components. 2026-04-23 16:39:31 +07:00
decolua
8de9aae90c feat: RTK compress 2026-04-22 15:36:51 +07:00
decolua
81cef7d022 Update version 2026-04-22 14:22:47 +07:00
decolua
45731ae639 feat: add OpenCode Go provider and support for custom models
- Introduced OpenCode Go provider with relevant configurations.
- Enhanced model management by allowing users to add and delete custom models.
- Updated UI components to support model selection for image types.
- Adjusted sidebar visibility to include image media kinds.
2026-04-22 14:16:21 +07:00
decolua
abb04c5366 feat: add support for Grok Web and Perplexity Web providers 2026-04-22 11:58:53 +07:00
decolua
eeb2dc9e30 fix(providerModels): remove deprecated DeepSeek 3.1 entry from provider models 2026-04-22 10:47:38 +07:00
anuragg-saxenaa
4638cf0e81 fix(ollama-local): support custom host URL for remote Ollama servers (closes #578)
Add a shared resolveOllamaLocalHost() helper and wire it through the
executor, models/validate/test routes, so users can point ollama-local
at a remote Ollama instance instead of being locked to localhost:11434.

Also expose the host as an "Ollama Host URL" field in AddApiKeyModal
(empty = default localhost:11434), making the option reachable from the
dashboard without hand-editing db.json.

Co-authored-by: anuragg-saxenaa <anuragg.saxenaa@gmail.com>
Made-with: Cursor
2026-04-22 10:45:46 +07:00
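A sketch of a host-resolution helper with the fallback behavior described above, where an empty field means the localhost:11434 default; the trailing-slash trim is an added assumption, not stated in the commit:

```javascript
// Resolve the Ollama host: empty/missing custom value falls back to the
// local default, so existing installs keep working unchanged.
const DEFAULT_OLLAMA_HOST = "http://localhost:11434";

function resolveOllamaLocalHost(customHost) {
  const host = (customHost || "").trim();
  if (!host) return DEFAULT_OLLAMA_HOST;
  return host.replace(/\/+$/, ""); // drop trailing slashes for clean URL joins
}
```

Sharing one helper across the executor and the models/validate/test routes (as the commit does) keeps all call sites agreeing on the same fallback.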
Anurag Saxena
94ab0d715d fix: update Qwen OAuth URLs from chat.qwen.ai to qwen.ai (closes #572) (#683)
* fix: make version update banner clickable to copy install command (closes #598)

* fix: resolve ollama-local baseUrl from providerSpecificData.baseUrl for remote Ollama hosts (closes #578)

* fix: add Ollama Cloud to usage/quota tracking (closes #681)

* fix: update Qwen OAuth URLs from chat.qwen.ai to qwen.ai per issue #572
2026-04-22 10:32:28 +07:00
Anurag Saxena
37f7e97348 fix: force Agent mode in Cursor protobuf when User-Agent contains Claude Code (closes #643) (#692) 2026-04-22 10:24:58 +07:00
omar-nahhas
95841f9a48 fix(github): preserve reasoning_effort for non-Claude models (#713)
The previous blanket strip in GithubExecutor.transformRequest removed
`thinking` AND `reasoning_effort` for every GitHub-routed model to avoid
Claude-on-Copilot 400s from OpenClaw. That regressed GPT-5 family support
(gh/gpt-5-mini honors reasoning_effort: low/medium/high).

Make supportsThinking(model) model-aware — return false only for Claude
models, so the strip fires only where the upstream actually rejects these
fields.

Benchmarks on /v1/chat/completions via GitHub Copilot:
  effort=(none) → 64 reasoning_tokens, ~2.0s
  effort=low    → 0  reasoning_tokens, ~1.55s
  effort=medium → 64 reasoning_tokens, ~1.9s
  effort=high   → 128 reasoning_tokens, ~2.2s

Made-with: Cursor
2026-04-22 10:23:31 +07:00
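A sketch of the model-aware check this commit describes: supportsThinking(model) returns false only for Claude models, so the field strip fires only where the upstream rejects these fields. The regex test and function shape are assumed implementation details, not the project's code:

```javascript
// Only Claude-on-Copilot rejects thinking/reasoning_effort, so strip the
// fields for Claude models and pass them through for everything else.
function supportsThinking(model) {
  return !/claude/i.test(model);
}

function transformRequest(body, model) {
  if (supportsThinking(model)) return body;
  const { thinking, reasoning_effort, ...rest } = body; // drop rejected fields
  return rest;
}
```

This restores reasoning_effort for the GPT-5 family (e.g. gh/gpt-5-mini) while keeping the Claude workaround intact.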
Bexultan
4b9a955c4b chore: refresh provider model list (#723) 2026-04-22 09:57:52 +07:00
decolua
6ab9927a28 fix: update backoff configuration and improve CLI detection messages
- Added installation guides for manual configuration in DroidToolCard.js and other tool cards to assist users in setting up the necessary CLI tools.
2026-04-17 12:54:50 +07:00