Previously only base64 data: URLs were handled in the OpenAI-to-Claude
and OpenAI-to-Gemini request translators. HTTP/HTTPS image URLs were
silently dropped, causing vision-capable models to respond with
"I don't see any image."
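A rough sketch of the fix this implies, using a hypothetical `toDataUrl` helper (name and placement are assumptions, not the actual translator code): fetch the remote image and inline it as a base64 data: URL before translation, so the Claude/Gemini translators can treat it like any other base64 image.

```typescript
// Hypothetical helper: inline a remote image as a base64 data: URL.
// Already-inline data: URLs pass through unchanged.
async function toDataUrl(url: string): Promise<string> {
  if (url.startsWith("data:")) return url; // already inline, pass through
  const res = await fetch(url);
  if (!res.ok) throw new Error(`image fetch failed: ${res.status}`);
  const mime = res.headers.get("content-type") ?? "image/png";
  const buf = Buffer.from(await res.arrayBuffer());
  return `data:${mime};base64,${buf.toString("base64")}`;
}
```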
Add stream_options: { include_usage: true } to iFlow streaming requests
to get token usage data in the final streaming chunk. This fixes token
counts showing as 0 for iFlow streaming requests.
Only injected when streaming is enabled and body.messages exists (OpenAI
format), and the client hasn't already set stream_options.
Note: applied only to the iFlow executor rather than BaseExecutor, to avoid
affecting all providers globally. This gives us more control and lets us
test with iFlow first.
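A minimal sketch of the injection guard described above, assuming an OpenAI-shaped request body (the function name is illustrative):

```typescript
// Illustrative guard: inject stream_options only for OpenAI-format
// streaming requests, and never override a client-supplied value.
type OpenAIBody = {
  stream?: boolean;
  messages?: unknown[];
  stream_options?: { include_usage?: boolean };
};

function injectStreamOptions(body: OpenAIBody): OpenAIBody {
  if (body.stream && Array.isArray(body.messages) && body.stream_options === undefined) {
    return { ...body, stream_options: { include_usage: true } };
  }
  return body; // non-streaming, non-OpenAI, or client already set it
}
```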
Fixes #74
Co-authored-by: Ibrahim Ryan <ryan@nuevanext.com>
Made-with: Cursor
On Linux, verify that Cursor IDE is actually installed before importing
tokens. Previously, leftover config files from a removed Cursor installation
would trigger a false positive, creating a phantom Cursor provider connection.
The check uses `which cursor` and falls back to checking for a .desktop file
in ~/.local/share/applications/.
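A sketch of that check; the helper name and the exact .desktop filename are assumptions:

```typescript
import { execFileSync } from "node:child_process";
import { existsSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";

// Sketch of the Linux install check: prefer `which cursor`, then fall
// back to a desktop entry left by a GUI install.
function cursorInstalledOnLinux(): boolean {
  try {
    execFileSync("which", ["cursor"], { stdio: "ignore" });
    return true; // cursor binary is on PATH
  } catch {
    return existsSync(join(homedir(), ".local/share/applications", "cursor.desktop"));
  }
}
```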
Fixes #313
Co-authored-by: Ibrahim Ryan <ryan@nuevanext.com>
Made-with: Cursor
Change the Codex test from a token-expiry-only check to probing the real
Codex API endpoint. Sends a minimal request body that triggers a fast
400 without consuming quota. A 400 confirms auth works; only 401/403
indicates a bad token.
Also adds generic acceptStatuses support to the OAuth test framework
so other providers can define non-200 success statuses.
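The status classification can be sketched as a pure function; the name and exact framework wiring are assumptions:

```typescript
// Sketch of the probe result classification: 401/403 always mean a bad
// token; 2xx or any explicitly accepted status (e.g. Codex's fast 400)
// means auth worked.
function classifyProbeStatus(status: number, acceptStatuses: number[]): boolean {
  if (status === 401 || status === 403) return false; // bad token
  return (status >= 200 && status < 300) || acceptStatuses.includes(status);
}
```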
When a provider has credentials but all are disabled, return 404 (NOT_FOUND)
instead of 400 (BAD_REQUEST). The combo handler already treats 404 as a
fallbackable error, so it will skip to the next model in the chain.
Previously, the 400 status caused the combo to stop with a hard error,
killing the client (e.g., Claude Code) even though other models in the
combo chain were available.
Also change the log level from error to warn, since disabled credentials
are an expected configuration state, not an error.
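The status choice can be sketched as follows (names and the success value are illustrative, not the actual handler code):

```typescript
// Illustrative mapping: no usable credentials now yields 404, which the
// combo handler treats as fallbackable, instead of a hard 400.
type Credential = { disabled?: boolean };

function statusForProvider(creds: Credential[]): number {
  if (creds.length === 0) return 404;             // no credentials at all
  if (creds.every((c) => c.disabled)) return 404; // all disabled: fall through
  return 200;                                     // at least one usable credential
}
```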
Fixes #334
Move better-sqlite3 to optionalDependencies so npm install doesn't
fail on platforms without native build tools. Add it to
serverExternalPackages so Next.js doesn't try to bundle the native
addon into webpack chunks.
Fixes #243, fixes #184
Thanks @East-rayyy for the contribution! Sorry for the delay in reviewing.
Co-authored-by: Ibrahim Ryan <ryan@nuevanext.com>
Made-with: Cursor
Add ability to configure round-robin strategy for individual combos,
similar to per-provider strategy overrides.
Changes:
- Add comboStrategies setting to store per-combo strategy overrides
- Add Round Robin toggle to each combo card in combos page
- Update chat handler to check combo-specific strategy before global
- Combo-specific strategy takes precedence over global comboStrategy
When enabled, each request to that combo will cycle through providers
instead of always starting with the first one.
Made-with: Cursor
- Add comboRotationState Map to track rotation per combo
- Add getRotatedModels() to rotate model order based on strategy
- Pass comboName and comboStrategy to handleComboChat()
- Add comboStrategy setting (default: fallback)
- Add UI toggle for Combo Round Robin in profile settings
When enabled, each request to a combo starts with a different provider
instead of always starting with the first one, distributing load evenly.
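A minimal sketch of the rotation, assuming the `comboRotationState` / `getRotatedModels` names from the bullets above match the implementation:

```typescript
// Per-combo round-robin: remember an offset per combo name and rotate the
// model list so each request starts at a different provider.
const comboRotationState = new Map<string, number>();

function getRotatedModels(comboName: string, models: string[], strategy: string): string[] {
  if (strategy !== "round-robin" || models.length === 0) return models;
  const offset = comboRotationState.get(comboName) ?? 0;
  comboRotationState.set(comboName, (offset + 1) % models.length);
  return [...models.slice(offset), ...models.slice(0, offset)];
}
```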
Co-authored-by: Antigravity Agent <antigravity@example.com>
Add a simple chat UI to the dashboard for quickly testing AI models from
connected providers. Features include:
- Model picker from all connected providers
- Streaming chat responses
- Image attachment support
- Session history with localStorage persistence
- Responsive design with dark theme
Note: Removed build.sh from original PR as it contained syntax errors and
was unrelated to the chat UI feature.
Co-authored-by: Nguyễn Trung Hiếu <140531897+bonelag@users.noreply.github.com>
Made-with: Cursor
Some upstream providers (e.g. Antigravity) return non-standard finish_reason
values like 'other' instead of the OpenAI-standard 'tool_calls' when the
model invokes tools. This causes downstream consumers (e.g. OpenClaw) to
fail to execute tool calls, breaking agentic sub-agent workflows.
Changes:
- nonStreamingHandler: post-translation guard that normalizes finish_reason
to 'tool_calls' when message.tool_calls is present
- sseToJsonHandler: accumulate tool_calls from streaming deltas in
parseSSEToOpenAIResponse; extract function_call items from Responses API
output in handleForcedSSEToJson
- openai-responses translator: use toolCallIndex to choose between
'tool_calls' and 'stop' in flush and response.completed events
Tested: 7 scenarios (non-stream text, single/multiple tool calls, stream
text/tool calls, multi-turn tool conversation, tools present but unused)
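The non-streaming guard can be sketched like this; field names follow the OpenAI chat-completions shape, and the helper name is assumed:

```typescript
// Post-translation guard: if the message carries tool_calls, force
// finish_reason to 'tool_calls' even when upstream sent e.g. 'other',
// so downstream agents actually execute the tools.
type Choice = {
  finish_reason: string | null;
  message: { tool_calls?: unknown[] };
};

function normalizeFinishReason(choice: Choice): Choice {
  const hasToolCalls = (choice.message.tool_calls?.length ?? 0) > 0;
  if (hasToolCalls && choice.finish_reason !== "tool_calls") {
    return { ...choice, finish_reason: "tool_calls" };
  }
  return choice;
}
```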
Kiro returns HTTP 400 with 'Improperly formed request (reset after Xs)'
when a model is not available on that account's subscription tier.
Previously this fell through to COOLDOWN_MS.transient (30s), causing
rapid retries on every account before failing, so all accounts got locked
simultaneously with no actual fallback.
Treating this as paymentRequired (2min cooldown) ensures:
1. The model is locked on that account for 2min (proper cooldown)
2. The next available account is tried immediately
3. If all accounts hit the same 400, 9Router falls through to the
next provider in the combo
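A sketch of the classification, assuming a COOLDOWN_MS table like the one named above; the helper name is illustrative and the message check mirrors Kiro's observed 400 body:

```typescript
// Assumed cooldown table mirroring the values in the description above.
const COOLDOWN_MS = { transient: 30_000, paymentRequired: 120_000 };

// Tier-unavailable 400s get the longer paymentRequired cooldown so the
// account is locked for 2min and the next account is tried immediately.
function kiroCooldownMs(status: number, message: string): number {
  if (status === 400 && message.includes("Improperly formed request")) {
    return COOLDOWN_MS.paymentRequired;
  }
  return COOLDOWN_MS.transient;
}
```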
Fixes #384
Root cause: Codex/OpenAI Responses streams multiple alternating reasoning and
message output items. The first message block often has empty output_text; the
visible answer lives in a later message. Previous code used output.find() which
always picked the first (empty) message block.
Fix: walk the message items from the end and use the last message whose
extracted text is non-empty; fall back to the final message if all are empty.
Note: Removed debug logging code from original PR #383 to keep implementation clean.
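The selection logic can be sketched as follows, assuming Responses-API-like output items (the item shape is simplified):

```typescript
// Pick the visible answer: last non-empty message item, falling back to
// the final message if every message is empty.
type OutputItem = { type: string; text?: string };

function pickVisibleMessage(output: OutputItem[]): OutputItem | undefined {
  const messages = output.filter((o) => o.type === "message");
  for (let i = messages.length - 1; i >= 0; i--) {
    if ((messages[i].text ?? "").trim() !== "") return messages[i];
  }
  return messages[messages.length - 1]; // all empty: prefer the final one
}
```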
Co-authored-by: lokinh <locnh@uniultra.xyz>
Made-with: Cursor
- fixes #335: on transient 503/502/504, wait for a short cooldown (up to
  5s) before falling back to the next combo model, giving the provider a
  chance to recover rather than skipping it immediately
- fixes #334: when all combo models have no active credentials, return
503 (Service Unavailable) instead of 406 (Not Acceptable), which is
more accurate and retriable by clients
The Gemini API requires enum properties to have an explicit type: "string"
declaration. Without it, tool calls with enum parameters fail with 400
Bad Request. Fixes #359.
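A sketch of the schema normalization; the helper name and the simplified schema type are assumptions:

```typescript
// Recursively add type: "string" to enum properties that lack a type,
// since Gemini rejects enums without an explicit type declaration.
type Schema = { type?: string; enum?: string[]; properties?: Record<string, Schema> };

function fixEnumTypes(schema: Schema): Schema {
  const out: Schema = { ...schema };
  if (out.enum && !out.type) out.type = "string";
  if (out.properties) {
    out.properties = Object.fromEntries(
      Object.entries(out.properties).map(([k, v]) => [k, fixEnumTypes(v)]),
    );
  }
  return out;
}
```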
* fix(usage): track lifetime request total beyond history cap
* fix(ui): restore provider assets and model availability endpoint
* fix(cursor): remove sql.js dependency from auto-import route
- Replaced message state with modalError in both components for better error management.
- Removed unused message display logic and adjusted action handling to improve clarity.
- Enhanced error handling in doAction and doDnsAction functions to ignore errors gracefully.
- Updated API call responses to streamline user feedback on actions.
Add MiniMax-M2.7 to the provider models and pricing config alongside the
existing M2.5. M2.7 is the latest reasoning model with a 204K context window.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
- Simplify ANTIGRAVITY_HEADERS to dynamic User-Agent only
- Use IDE_TYPE, PLUGIN_TYPE enums and getPlatformEnum() in metadata
- Update antigravity baseUrl to sandbox endpoint
- Bump User-Agent version from 1.104.0 to 1.107.0
- Remove redundant header spread in AntigravityExecutor
Made-with: Cursor
- Remove hard exit when ROUTER_API_KEY is missing
- Conditionally attach Authorization header only if API_KEY is set
- Allow MITM auto-start without requiring an active API key
- Fallback to default key when no active key is found
Made-with: Cursor
- Add 30 new locale files (ar, bn, cs, da, de, el, es, fi, fr, he, hi, hu, id, it, ja, ko, nl, no, pl, pt-BR, pt-PT, ro, ru, sv, th, tl, tr, uk, ur, zh-TW)
- Update i18n config to register new languages
- Update LanguageSwitcher component to support expanded language list
- Update localDb and pricing constants for i18n compatibility
Made-with: Cursor
* feat: add modelId fallback for provider validation
- If /models endpoint unavailable, validate via /chat/completions
- Add optional Model ID input in EditCompatibleNodeModal
- Improves compatibility with providers lacking /models endpoint
* feat: improve provider validation with modelId fallback
- Add Model ID input for chat/completions fallback validation
- Reorder UI: API Key → Model ID → Check button + Badge
- Display detailed backend error messages in the frontend
- Add status-specific error handling (401/403/400/404/5xx)
- Add unit tests for error message helpers
- Add vitest devDependency
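The status-specific error handling can be sketched as a message helper; the exact wording is an assumption, not the shipped strings:

```typescript
// Illustrative mapping from validation HTTP status to a user-facing
// message, covering the 401/403/400/404/5xx cases listed above.
function validationErrorMessage(status: number): string {
  if (status === 401 || status === 403) return "API key rejected (check the key and its permissions)";
  if (status === 400) return "Request rejected (check the Model ID)";
  if (status === 404) return "Endpoint not found (provider may lack this route)";
  if (status >= 500) return "Provider error, try again later";
  return `Unexpected status ${status}`;
}
```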