Commit Graph

519 Commits

Author SHA1 Message Date
decolua
2b1faeb2c6 Fix: Qwen provider 2026-04-04 12:38:39 +07:00
decolua
d84489dba4 - Introduce default MITM router base URL and update related components to handle it.
- Add input for MITM router base URL in MitmServerCard component.
2026-04-04 11:25:58 +07:00
Anurag Saxena
2e740ad7e4 fix: pass isFree prop to ModelRow for custom models (closes #461) (#480) 2026-04-04 08:53:11 +07:00
Anurag Saxena
7f4f75a807 fix: pass HOME explicitly in sudo inlineCmd so MITM server resolves correct data dir (closes #478) (#482) 2026-04-04 08:52:31 +07:00
Anurag Saxena
5fe2c81cf9 fix: skip function_call items with empty/missing name to prevent Codex 400 error (closes #444) (#487) 2026-04-04 08:51:00 +07:00
Anurag Saxena
38eabae2e7 fix: retry /responses endpoint when GitHub returns model not supported (closes #470) (#488) 2026-04-04 08:50:38 +07:00
Anurag Saxena
006c337de5 fix: use which instead of command -v for openclaw CLI detection (closes #457) (#489) 2026-04-04 08:49:59 +07:00
decolua
93b8668e9e Fix AG 2026-04-01 11:48:38 +07:00
decolua
9708541f6d Fix bug 2026-03-31 15:44:19 +07:00
Vishal Raj V
8640503b36 feat(kilo): fetch free models from Kilo API + Windows build fixes (#455)
- Add /api/providers/kilo/free-models endpoint with 1hr cache
- Fetch and merge Kilo free models with hardcoded models for kilocode provider
- Display 'Free' badge on models fetched from Kilo API
- Fix Windows build: add cross-env, remove --webpack flag, add turbopack config
- Add outputFileTracingExcludes for Windows system directories
2026-03-31 09:22:21 +07:00
Kwan96
ffa172c92d fix(claude-to-openai): emit closing </think> tag instead of empty reasoning_content (#454)
Replace empty reasoning_content with explicit </think> closing tag when exiting thinking block to properly signal end of reasoning section in streaming responses.
2026-03-31 09:21:11 +07:00
decolua
01787a3d5b Fix bug 2026-03-30 17:27:15 +07:00
kwanLeeFrmVi
054facb08b fix(gemini): preserve thoughtSignature via tool_call ID smuggling + fix ELOCKED mutex
- Encode thoughtSignature into tool_call.id using _TSIG_ delimiter and base64url
- Decode _TSIG_ on request to restore thoughtSignature for Gemini multi-turn thinking
- Track pendingThoughtSignature across parts for deferred signature attachment
- Add LocalMutex (2-layer locking) to prevent ELOCKED on concurrent DB access
- Increase lockfile retries from 5 to 15 for multi-process robustness
- Restore db.json seed on first run to prevent ENOENT on lockfile.lock
- Use process.env.BASE_URL fallback in models test route
- Remove gemini-3-flash-lite-preview from provider models

Co-authored-by: kwanLeeFrmVi <quanle96@outlook.com>
Closes #450

Made-with: Cursor
2026-03-30 16:57:28 +07:00
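The ID-smuggling scheme described above can be sketched as follows. The `_TSIG_` delimiter and base64url encoding come from the commit message; the function names and exact layout are hypothetical:

```javascript
const DELIM = "_TSIG_";

// Encode: append the Gemini thoughtSignature to the tool_call id as base64url,
// so it survives a round trip through OpenAI-format clients.
function encodeThoughtSignature(toolCallId, thoughtSignature) {
  if (!thoughtSignature) return toolCallId;
  const b64 = Buffer.from(thoughtSignature, "utf8").toString("base64url");
  return `${toolCallId}${DELIM}${b64}`;
}

// Decode: split on the delimiter to restore the original id and the signature
// for multi-turn thinking requests.
function decodeThoughtSignature(smuggledId) {
  const idx = smuggledId.indexOf(DELIM);
  if (idx === -1) return { id: smuggledId, thoughtSignature: null };
  const sig = Buffer.from(
    smuggledId.slice(idx + DELIM.length),
    "base64url"
  ).toString("utf8");
  return { id: smuggledId.slice(0, idx), thoughtSignature: sig };
}
```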
kwanLeeFrmVi
1c160cc8d9 feat(claude-code): spoof TLS fingerprint and stabilize headers for Anthropic
- Add claudeHeaderCache.js to intercept and cache live Claude Code client headers
- Forward cached headers dynamically to api.anthropic.com via default.js
- Strip first-party identity headers (x-app, claude-code-* beta) for non-Anthropic upstreams
- Validate and sanitize tool call IDs to match Anthropic pattern (^[a-zA-Z0-9_-]+$)
- Skip thinking blocks when applying cache_control; fix max_tokens buffer (+1024)
- Strip cache_control from thinking blocks in openai-to-claude translator
- Comment out thoughtSignature in Gemini translator (kept for reference)
- Expand .gitignore to match all deploy*.sh variants

Co-authored-by: kwanLeeFrmVi <quanle96@outlook.com>
Closes #433

Made-with: Cursor
2026-03-30 16:27:28 +07:00
bitgineer
83354889cf fix: handle anthropic-compatible providers in BaseExecutor (#428)
The BaseExecutor's buildUrl() and buildHeaders() methods only handled
openai-compatible-* providers but not anthropic-compatible-* providers.
This caused Anthropic-compatible synthetic providers to fail API testing
by hitting the wrong endpoint (returning documentation instead of valid
API responses) and using incorrect auth headers.

Changes:
- Added buildUrl() handling for anthropic-compatible-* providers
  to append /messages path
- Added buildHeaders() handling for x-api-key header and
  anthropic-version for anthropic-compatible providers

Fixes #XXX

Co-authored-by: Bitgineer <bitgineer@bitgineer.shop>
2026-03-30 12:27:09 +07:00
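A minimal sketch of the two additions this commit describes, assuming Anthropic's standard `x-api-key` plus `anthropic-version` header convention; the real BaseExecutor method signatures in open-sse may differ:

```javascript
// Hypothetical standalone versions of the BaseExecutor helpers.
function buildUrl(provider, baseUrl) {
  const base = baseUrl.replace(/\/$/, "");
  if (provider.startsWith("anthropic-compatible-")) {
    // Anthropic-style APIs expose chat at /messages, not /chat/completions.
    return base + "/messages";
  }
  // openai-compatible-* providers keep the OpenAI-style path.
  return base + "/chat/completions";
}

function buildHeaders(provider, apiKey) {
  if (provider.startsWith("anthropic-compatible-")) {
    // Anthropic auth uses x-api-key and requires an anthropic-version header.
    return { "x-api-key": apiKey, "anthropic-version": "2023-06-01" };
  }
  return { Authorization: `Bearer ${apiKey}` };
}
```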
Manuel B.
cd1e06b641 fix: add missing clientId to github provider config for OAuth token refresh (#442)
The github provider in open-sse/config/providers.js was missing clientId,
causing refreshGitHubToken() to send client_id=undefined on 401 retry.
Also guard against undefined clientSecret in both refresh implementations.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-30 12:24:31 +07:00
decolua
e6299eef56 Fix Bug 2026-03-30 12:21:24 +07:00
decolua
abbf8ec86f feat: add GitLab Duo and CodeBuddy support, update observability settings 2026-03-30 11:28:07 +07:00
decolua
11e6004fcb fix: correct finish_reason for tool calls in OpenAI Responses translator
Apply fix from PR #354 by @tannk4w to properly signal tool_calls finish_reason
when model emits tool calls, allowing OpenAI-compatible clients to continue with
tool result processing instead of stopping prematurely.

Refactored finish_reason logic into computeFinishReason() helper to eliminate
duplication and improve maintainability across flush and completion paths.

Co-authored-by: tannk4w <tannk@tmi-soft.vn>

Thanks to @tannk4w, @trungtq2799, @quanhavn, and @East-rayyy for the thorough
review and improvement suggestions on the original PR.

Made-with: Cursor
2026-03-28 14:45:07 +07:00
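The extracted helper might look like this minimal sketch (the real translator tracks streaming state that is omitted here):

```javascript
// Prefer "tool_calls" whenever the model emitted tool calls, so
// OpenAI-compatible clients continue with tool result processing
// instead of stopping prematurely.
function computeFinishReason(hasToolCalls, upstreamReason) {
  if (hasToolCalls) return "tool_calls";
  return upstreamReason || "stop";
}
```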
decolua
bf99c600f1 Fix Bug 2026-03-27 11:45:54 +07:00
Anurag Saxena
fcc8320753 feat: add OpenCode provider support (#387)
Adds OpenCode (https://github.com/opencode-ai/opencode) as a supported
provider. OpenCode is an open-source terminal AI coding assistant with
an OpenAI-compatible API running locally.

Changes:
- open-sse/config/providers.js: add opencode baseUrl (localhost:4096)
  with openai format (fully compatible, no custom headers needed)
- open-sse/services/model.js: add 'oc' alias → opencode
- src/shared/constants/providers.js: add opencode to subscription
  providers with alias 'oc', icon 'terminal', color #E87040

Usage after setup: use model prefix 'oc/<model>' to route through
a running OpenCode instance (e.g. oc/claude-sonnet-4-5).

Closes #378
2026-03-27 11:17:23 +07:00
decolua
3f47038933 fix: rename tunnelUrl to tunnelPublicUrl for clarity in CLIToolsPageClient 2026-03-27 11:03:49 +07:00
anuragg-saxenaa
f05d64e073 fix: use project-scoped Vertex URL for SA JSON auth and add ?alt=sse for streaming (closes #388)
When using SA JSON + Bearer token, Vertex AI requires a project-scoped URL.
The old code used the global publishers endpoint which only works with a raw API key,
causing RESOURCE_PROJECT_INVALID errors.

Changes in open-sse/executors/vertex.js buildUrl():
- SA JSON path: projects/{projectId}/locations/{location}/publishers/google/models/{model}:{action}
- Appends ?alt=sse for streaming on SA JSON path
- Location defaults to us-central1, overridable via providerSpecificData.location
- Raw API key path unchanged (global publishers + ?key= param)

Co-authored-by: anuragg-saxenaa <anuragg.saxenaa@gmail.com>
Made-with: Cursor
2026-03-27 11:03:49 +07:00
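The project-scoped URL construction described above can be sketched as follows; the path segments, `us-central1` default, and `?alt=sse` suffix mirror the commit message, while the function shape is illustrative:

```javascript
// Build a Vertex AI URL for the SA JSON (Bearer token) auth path.
function buildVertexUrl({ projectId, location = "us-central1", model, action, stream }) {
  const host = `https://${location}-aiplatform.googleapis.com`;
  let url =
    `${host}/v1/projects/${projectId}/locations/${location}` +
    `/publishers/google/models/${model}:${action}`;
  if (stream) url += "?alt=sse"; // SSE framing for streaming responses
  return url;
}
```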
Anurag Saxena
5abf7102c0 fix: inject placeholder message when Responses API input[] is empty (closes #389) (#419) 2026-03-27 10:51:02 +07:00
Anurag Saxena
4e631c4f37 fix: map OpenAI image_url data URLs to Ollama images[] (closes #427) (#432) 2026-03-27 10:48:18 +07:00
Anurag Saxena
e3a7733a08 fix: strip functionCall/functionResponse id and synthetic thoughtSignature for Vertex AI (closes #388) (#414) 2026-03-27 10:46:47 +07:00
Anurag Saxena
a6c764d772 fix: use better-sqlite3 for Cursor auto-import, drop sqlite3 CLI requirement (closes #395) (#411) 2026-03-27 10:45:11 +07:00
Anurag Saxena
5e308e8ff2 fix: add ?alt=sse to Vertex streaming URL (closes #388) (#409) 2026-03-27 10:44:11 +07:00
Anurag Saxena
2f0fd348c5 fix: add deprecation warning for Gemini CLI provider (closes #362) (#406) 2026-03-27 10:41:35 +07:00
Anurag Saxena
ade3f57d4c fix: sanitize Gemini function names to meet API requirements (closes #369) (#403) 2026-03-27 10:40:29 +07:00
Ryan
56be393a59 feat: expand OpenAI and Gemini static model lists (#398)
OpenAI: add GPT-5.x series, GPT-4.1 variants, o3/o4 reasoning
models, embedding models, and TTS models (5 → 26 models).

Gemini: add 3.1 Flash Image, 3 Flash Lite, 2.0 Flash/Lite,
Embedding 2; remove deprecated 3 Pro Preview (10 → 14 models).

Closes #179, partially addresses #178.
2026-03-27 10:39:07 +07:00
Ryan
3b4184b09e fix: detect Claude format for /v1/messages + sanitize tool descriptions (#397) 2026-03-27 10:38:37 +07:00
Anurag Saxena
868eabffc0 fix: clamp Responses API call_id to 64 chars (closes #393) (#396) 2026-03-27 10:37:31 +07:00
decolua
8759545260 chore: add proper-lockfile for safe database read/write operations and implement retry logic for file access 2026-03-27 10:31:35 +07:00
decolua
3059df4014 chore: adjust opacity settings for ConnectionRow to improve user experience. 2026-03-26 10:48:53 +07:00
Ryan
99cb9ed11f fix: support HTTP/HTTPS image URLs in Claude and Gemini translators (#344)
Previously only base64 data: URLs were handled in the OpenAI-to-Claude
and OpenAI-to-Gemini request translators. HTTP/HTTPS image URLs were
silently dropped, causing vision-capable models to respond with
"I don't see any image."
2026-03-23 14:56:40 +07:00
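The OpenAI-to-Claude side of the fix can be sketched as below. The helper name is hypothetical; it assumes Claude's Messages API accepts both `base64` and `url` image sources:

```javascript
// Translate an OpenAI image_url string into a Claude content block.
function toClaudeImageBlock(imageUrl) {
  if (imageUrl.startsWith("data:")) {
    // Existing behavior: unpack base64 data: URLs.
    const [, mediaType, data] =
      imageUrl.match(/^data:([^;]+);base64,(.*)$/) || [];
    return {
      type: "image",
      source: { type: "base64", media_type: mediaType, data },
    };
  }
  if (/^https?:\/\//.test(imageUrl)) {
    // New behavior: pass HTTP/HTTPS URLs through instead of dropping them.
    return { type: "image", source: { type: "url", url: imageUrl } };
  }
  return null; // unknown scheme: drop
}
```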
decolua
8df8b94180 Enhance image support in Kiro for Claude models. Update the message conversion logic to conditionally handle image types based on model capabilities. Additionally, hide the Basic Chat option in the sidebar for a cleaner UI. 2026-03-23 12:29:48 +07:00
decolua
4496bf96c8 feat: Kiro provider can now read images. 2026-03-23 12:17:51 +07:00
decolua
0c9ad12055 fix: error 400 2026-03-23 12:05:22 +07:00
Ibrahim Ryan
e9ccae4ca1 fix(iflow): inject stream_options for usage data in streaming
Add stream_options: { include_usage: true } to iFlow streaming requests
to get token usage data in the final streaming chunk. This fixes token
counts showing as 0 for iFlow streaming requests.

Only injected when streaming is enabled and body.messages exists (OpenAI
format), and the client hasn't already set stream_options.

Note: Applied only to iFlow executor instead of BaseExecutor to avoid
affecting all providers globally. This gives us more control and allows
testing with iFlow first.

Fixes #74

Co-authored-by: Ibrahim Ryan <ryan@nuevanext.com>
Made-with: Cursor
2026-03-23 12:03:11 +07:00
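The three guard conditions in that commit can be sketched as a single function; the shape is illustrative, the conditions are as stated:

```javascript
// Inject stream_options only for OpenAI-format streaming requests,
// and never clobber a value the client already set.
function injectStreamOptions(body) {
  if (body.stream && Array.isArray(body.messages) && !body.stream_options) {
    body.stream_options = { include_usage: true };
  }
  return body;
}
```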
Ibrahim Ryan
8312af79a4 fix(cursor): verify Cursor installation on Linux before auto-import
On Linux, verify that Cursor IDE is actually installed before importing
tokens. Previously, leftover config files from a removed Cursor installation
would trigger a false positive, creating a phantom Cursor provider connection.

The check uses `which cursor` and falls back to checking for a .desktop file
in ~/.local/share/applications/

Fixes #313

Co-authored-by: Ibrahim Ryan <ryan@nuevanext.com>
Made-with: Cursor
2026-03-23 10:31:48 +07:00
Ryan
97f2a00e74 fix: test Codex connection against actual endpoint (#347)
Change Codex test from token-expiry-only check to probing the real
Codex API endpoint. Sends a minimal request body that triggers a fast
400 without consuming quota. A 400 confirms auth works; only 401/403
indicates a bad token.

Also adds generic acceptStatuses support to the OAuth test framework
so other providers can define non-200 success statuses.
2026-03-23 10:29:28 +07:00
Ryan
1ed6c4c76f fix: prevent duplicate model aliases on import (#340) 2026-03-23 10:27:50 +07:00
Ryan
037d013af8 fix: skip disabled providers in combo fallback instead of returning 406 (#336)
When a provider has credentials but all are disabled, return 404 (NOT_FOUND)
instead of 400 (BAD_REQUEST). The combo handler already treats 404 as a
fallbackable error, so it will skip to the next model in the chain.

Previously, the 400 status caused the combo to stop with a hard error,
killing the client (e.g., Claude Code) even though other models in the
combo chain were available.

Also changed log level from error to warn since disabled credentials
are an expected configuration state, not an error.

Fixes #334
2026-03-23 10:25:35 +07:00
decolua
312dd749fe fix: externalize better-sqlite3 for Next.js standalone builds
Move better-sqlite3 to optionalDependencies so npm install doesn't
fail on platforms without native build tools. Add it to
serverExternalPackages so Next.js doesn't try to bundle the native
addon into webpack chunks.

Fixes #243
Fixes #184

Thanks @East-rayyy for the contribution! Sorry for the delay in reviewing.

Co-authored-by: Ibrahim Ryan <ryan@nuevanext.com>
Made-with: Cursor
2026-03-23 10:22:17 +07:00
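The Next.js side of that change amounts to a one-line config entry; this is a sketch of the shape described in the message (the project's actual `next.config.js` contains other settings not shown):

```javascript
// next.config.js: keep the native addon out of webpack chunks so the
// standalone build loads it from node_modules at runtime.
module.exports = {
  serverExternalPackages: ["better-sqlite3"],
};
// package.json: better-sqlite3 moves to "optionalDependencies" so
// npm install succeeds on platforms without native build tools.
```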
decolua
3e694a383f feat(combos): add per-combo round-robin strategy
Add ability to configure round-robin strategy for individual combos,
similar to per-provider strategy overrides.

Changes:
- Add comboStrategies setting to store per-combo strategy overrides
- Add Round Robin toggle to each combo card in combos page
- Update chat handler to check combo-specific strategy before global
- Combo-specific strategy takes precedence over global comboStrategy

When enabled, each request to that combo will cycle through providers
instead of always starting with the first one.

Made-with: Cursor
2026-03-23 10:08:24 +07:00
bitgineer
96f5e5c92a Add combo round-robin strategy to distribute load across providers (#390)
- Add comboRotationState Map to track rotation per combo
- Add getRotatedModels() to rotate model order based on strategy
- Pass comboName and comboStrategy to handleComboChat()
- Add comboStrategy setting (default: fallback)
- Add UI toggle for Combo Round Robin in profile settings

When enabled, each request to a combo starts with a different provider
instead of always starting with the first one, distributing load evenly.

Co-authored-by: Antigravity Agent <antigravity@example.com>
2026-03-23 09:52:31 +07:00
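The rotation mechanism this commit describes can be sketched as follows; `comboRotationState` and `getRotatedModels` are named in the message, the body is illustrative:

```javascript
// Track the next starting index per combo across requests.
const comboRotationState = new Map(); // comboName -> start index

// Rotate the model list so each request starts with a different provider,
// distributing load evenly instead of always hitting the first one.
function getRotatedModels(comboName, models, strategy) {
  if (strategy !== "round-robin" || models.length === 0) return models;
  const start = comboRotationState.get(comboName) ?? 0;
  comboRotationState.set(comboName, (start + 1) % models.length);
  return [...models.slice(start), ...models.slice(0, start)];
}
```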
Nguyễn Trung Hiếu
6b0cced884 feat(ui): add Basic Chat interface for testing models
Add a simple chat UI to the dashboard for quickly testing AI models from
connected providers. Features include:
- Model picker from all connected providers
- Streaming chat responses
- Image attachment support
- Session history with localStorage persistence
- Responsive design with dark theme

Note: Removed build.sh from original PR as it contained syntax errors and
was unrelated to the chat UI feature.

Co-authored-by: Nguyễn Trung Hiếu <140531897+bonelag@users.noreply.github.com>
Made-with: Cursor
2026-03-23 09:45:04 +07:00
Liam
01e4a28f0a fix: normalize finish_reason to 'tool_calls' when tool calls are present (#379)
Some upstream providers (e.g. Antigravity) return non-standard finish_reason
values like 'other' instead of the OpenAI-standard 'tool_calls' when the
model invokes tools. This causes downstream consumers (e.g. OpenClaw) to
fail to execute tool calls, breaking agentic sub-agent workflows.

Changes:
- nonStreamingHandler: post-translation guard that normalizes finish_reason
  to 'tool_calls' when message.tool_calls is present
- sseToJsonHandler: accumulate tool_calls from streaming deltas in
  parseSSEToOpenAIResponse; extract function_call items from Responses API
  output in handleForcedSSEToJson
- openai-responses translator: use toolCallIndex to choose between
  'tool_calls' and 'stop' in flush and response.completed events

Tested: 7 scenarios (non-stream text, single/multiple tool calls, stream
text/tool calls, multi-turn tool conversation, tools present but unused)
2026-03-23 09:35:25 +07:00
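The post-translation guard in the non-streaming handler can be sketched as a small normalizer; the function name is hypothetical:

```javascript
// Force the OpenAI-standard finish_reason when tool calls are present,
// overriding non-standard upstream values like "other".
function normalizeFinishReason(choice) {
  const calls = choice.message && choice.message.tool_calls;
  if (Array.isArray(calls) && calls.length > 0) {
    choice.finish_reason = "tool_calls";
  }
  return choice;
}
```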
Anurag Saxena
b8918c0c1c fix: treat Kiro 400 'improperly formed request' as model-unavailable (#386)
Kiro returns HTTP 400 with 'Improperly formed request (reset after Xs)'
when a model is not available on that account's subscription tier.
Previously this fell through to COOLDOWN_MS.transient (30s), causing
rapid retries on all accounts before failing — all accounts get locked
simultaneously with no actual fallback.

Treating this as paymentRequired (2min cooldown) ensures:
1. The model is locked on that account for 2min (proper cooldown)
2. The next available account is tried immediately
3. If all accounts hit the same 400, 9Router falls through to the
   next provider in the combo

Fixes #384
2026-03-23 09:31:31 +07:00
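The reclassification described above can be sketched as follows. The 30s/2min cooldowns and the error text come from the commit message; the matching regex and `COOLDOWN_MS` shape are assumptions:

```javascript
const COOLDOWN_MS = { transient: 30_000, paymentRequired: 120_000 };

// Treat Kiro's 400 "Improperly formed request" as model-unavailable:
// lock the model on this account for the longer cooldown so the next
// account (or the next combo provider) is tried immediately.
function classifyKiroError(status, message) {
  if (status === 400 && /improperly formed request/i.test(message)) {
    return { type: "paymentRequired", cooldownMs: COOLDOWN_MS.paymentRequired };
  }
  return { type: "transient", cooldownMs: COOLDOWN_MS.transient };
}
```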