289 Commits

Author SHA1 Message Date
anuragg-saxenaa
afe09f3f14 fix(github): strip thinking/reasoning_effort for Copilot chat completions (closes #623)
The GitHub Copilot /chat/completions endpoint does not support the thinking
or reasoning_effort fields. OpenClaw sends thinking: { type: "enabled" }
for Claude models, which causes a 400 Bad Request.

Add supportsThinking() and strip both fields in transformRequest before
sending to the upstream endpoint.
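A minimal sketch of the stripping step described above, assuming a transformRequest(body) hook on the executor; the provider-id check inside supportsThinking() is illustrative, not the real predicate:

```javascript
// Hypothetical capability check: Copilot's /chat/completions rejects
// thinking/reasoning_effort with 400 Bad Request.
function supportsThinking(providerId) {
  return providerId !== 'github-copilot';
}

// Drop both fields for providers that cannot accept them; forward the
// rest of the request body untouched.
function transformRequest(body, providerId) {
  if (!supportsThinking(providerId)) {
    const { thinking, reasoning_effort, ...rest } = body;
    return rest;
  }
  return body;
}
```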

Co-authored-by: anuragg-saxenaa <anuragg.saxenaa@gmail.com>
Made-with: Cursor
2026-04-17 12:54:50 +07:00
Anurag Saxena
2e8784cf79 fix: show quota auth expired message for Kiro social auth accounts (closes #588) (#620)
* fix: add multi-model support for Factory Droid CLI tool (closes #521)

* fix: show quota auth expired message for Kiro social auth accounts (closes #588)
2026-04-17 12:24:35 +07:00
anuragg-saxenaa
d0ace2a3cf fix(codex): await image URL fetches before sending to upstream (closes #575)
Remote HTTP(S) image URLs are fetched and inlined as base64 data URIs
in a new prefetchImages() step run before super.execute(), so the body
sent to Codex contains resolved image bytes instead of URLs the backend
cannot access.

Scope is limited to the Codex executor — base executor and other
providers are untouched.
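The prefetch step could look roughly like this, assuming OpenAI-style image_url content parts; the real Codex executor may walk a different body shape, and fetchImpl is injectable here only for illustration:

```javascript
// Resolve remote HTTP(S) image URLs into base64 data URIs in place,
// so the upstream backend never has to fetch URLs it cannot reach.
async function prefetchImages(messages, fetchImpl = fetch) {
  for (const msg of messages) {
    if (!Array.isArray(msg.content)) continue;
    for (const part of msg.content) {
      const url = part?.image_url?.url;
      if (typeof url === 'string' && /^https?:\/\//.test(url)) {
        const res = await fetchImpl(url);
        const mime = res.headers.get('content-type') || 'image/png';
        const buf = Buffer.from(await res.arrayBuffer());
        part.image_url.url = `data:${mime};base64,${buf.toString('base64')}`;
      }
    }
  }
  return messages;
}
```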

Co-authored-by: anuragg-saxenaa <anuragg.saxenaa@gmail.com>
Made-with: Cursor
2026-04-17 12:15:10 +07:00
anuragg-saxenaa
6e8aaab299 fix: strip enumDescriptions from tool schema in antigravity-to-openai (closes #566)
Co-authored-by: anuragg-saxenaa <anuragg.saxenaa@gmail.com>
Made-with: Cursor
2026-04-17 12:15:10 +07:00
decolua
3977edc460 Enhance error formatting to include low-level cause details, update HeaderMenu to use MenuItem component for better structure, and improve LanguageSwitcher to support controlled open state. Update subproject commit status. 2026-04-17 12:15:10 +07:00
decolua
75c4598da0 Add marked package, update Qwen executor for OAuth handling, and enhance changelog styles 2026-04-17 12:15:10 +07:00
Anurag Saxena
3badf1cbb6 fix: add Blackbox AI as a supported provider (closes #599) (#630)
* fix: add multi-model support for Factory Droid CLI tool (closes #521)

* Add Claude Opus 4.7 to cc and cl provider lists

RESEARCH confirmed GA release April 16, 2026. Adding to:
- cc (Claude Code): claude-opus-4-7
- cl (Cline): anthropic/claude-opus-4.7

Refs: TICKET-20260416-ENG-O4.7-001

* fix: add Blackbox AI as a supported provider (closes #599)
2026-04-17 12:06:00 +07:00
Anurag Saxena
554bbfceed fix: strip temperature parameter for gpt-5.4 model (closes #536) (#612) 2026-04-17 12:02:53 +07:00
Anurag Saxena
aa67198e75 fix: add GLM-5 and MiniMax-M2.5 models to Kiro provider (closes #580) (#611) 2026-04-17 12:02:07 +07:00
decolua
f5aa8215b4 Fix bug 2026-04-15 11:46:47 +07:00
decolua
b669b6ffc1 Refactor error handling to config-driven approach with centralized error rules
Made-with: Cursor
2026-04-15 11:46:47 +07:00
Dang Dinh Quan
b1288c5064 feat(kiro): wire aws identity center device flow into provider oauth (#587)
* feat(kiro): wire aws identity center device flow into provider oauth

Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com>
2026-04-15 11:44:46 +07:00
decolua
04cdb75839 Add proactive token refresh lead times for providers and implement Codex proxy management 2026-04-14 11:41:06 +07:00
decolua
4bff21cb80 Enhance CodexExecutor with compact URL support 2026-04-14 10:18:27 +07:00
decolua
6a6e2fcd77 Fix: noAuth support for providers and adjust MITM restart settings. 2026-04-14 10:14:50 +07:00
decolua
3b1a608e8d Fix codex cache session id 2026-04-13 15:52:01 +07:00
decolua
4c28a1671d Enhance provider models and chat handling with new thinking configurations 2026-04-13 12:04:57 +07:00
decolua
89eb26dee2 Enhance proxy functionality with Vercel relay support 2026-04-13 10:08:24 +07:00
decolua
b3feb96740 Enhance TTS functionality and security settings
- Integrated Google TTS languages from a separate module for better maintainability.
- Updated local device voice fetching to support both macOS and Windows, improving cross-platform compatibility.
- Enhanced dashboard route protection by adding dynamic settings for login requirements and tunnel access.
- Introduced UI elements for managing security settings related to API key requirements and dashboard access via tunnel.
- Added default TTS response example in the media provider page for better user guidance.
- Updated constants to reflect changes in TTS provider configurations.

This commit improves the overall user experience and security of the TTS features.
2026-04-11 14:56:35 +07:00
decolua
875a1282ea Fix bug 2026-04-11 11:36:33 +07:00
decolua
ed17a8ffac Feat: Tailscale 2026-04-11 11:36:33 +07:00
Omar Nahhas Sanchez
878cdf302b fix: only strip reasoning_content when content is non-empty (#542)
sseToJsonHandler.js unconditionally deleted reasoning_content from all
non-streaming responses (added for Firecrawl SDK compatibility). This
breaks thinking models (Qwen3.5, Claude extended thinking, etc.) where
the model may use all tokens for reasoning, leaving content empty.

When reasoning_content is stripped in that case, the response appears
completely empty to the client.

Fix: only strip reasoning_content when the response also has non-empty
content, so that reasoning output is preserved when it is the only
useful output.
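The guarded strip can be sketched as follows, assuming an OpenAI-style non-streaming response shape; the function name is illustrative, not the handler's real API:

```javascript
// Strip reasoning_content only when there is real content to fall back on.
// If reasoning is the only output (content empty), keep it so the client
// does not receive a completely empty response.
function stripReasoning(response) {
  for (const choice of response.choices ?? []) {
    const msg = choice.message;
    if (msg && msg.reasoning_content !== undefined) {
      if (typeof msg.content === 'string' && msg.content.length > 0) {
        delete msg.reasoning_content;
      }
    }
  }
  return response;
}
```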

Co-authored-by: Agent Zero <agent@agent-zero.local>
2026-04-10 10:28:58 +07:00
decolua
3c96e8d6d1 Feat: TTS 2026-04-10 10:17:53 +07:00
decolua
39545cf4c8 Fix combo modal 2026-04-09 10:16:11 +07:00
Anurag Saxena
23abe1a7bb fix: merge consecutive userInputMessages in openai-to-kiro translator (closes #510) (#524) 2026-04-08 15:38:28 +07:00
Payne
32a746181a fix: update Cursor client version to 3.1.0 for Composer 2 compatibility (#525)
Cursor's API now rejects requests with outdated client versions,
returning [400]: Update Required for Composer 2. Bump
x-cursor-client-version from 2.3.41 to 3.1.0 across all three
locations where it is set.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 15:37:51 +07:00
decolua
401772cb9a Fix bug strip image 2026-04-07 10:18:59 +07:00
Anurag Saxena
a53ccf1343 fix: strip reasoning_content from non-streaming responses (closes #509) (#517) 2026-04-07 09:46:28 +07:00
decolua
307be3b63d Fix bug 2026-04-06 17:32:44 +07:00
decolua
57cfaccceb Fix ModelSelectModal 2026-04-05 21:25:00 +07:00
decolua
67e0db77da Fix: update Anthropic-Beta header. 2026-04-05 07:46:26 +07:00
decolua
1973fe5a83 fix(translator): correct thought signatures for AG, Gemini CLI, Vertex; fix missing Vertex response translator
- Add DEFAULT_THINKING_AG_SIGNATURE, DEFAULT_THINKING_GEMINI_CLI_SIGNATURE, DEFAULT_THINKING_VERTEX_SIGNATURE
- Rename DEFAULT_THINKING_GEMINI_SIGNATURE → DEFAULT_THINKING_AG_SIGNATURE for clarity
- Pass provider-specific signature into openaiToGeminiBase (AG vs Gemini CLI)
- Replace synthetic thoughtSignatures with Vertex-native signature in postProcessForVertex
- Register Vertex → OpenAI response translator (fixes empty Vertex streaming responses)

Made-with: Cursor
2026-04-05 00:38:36 +07:00
kwanLeeFrmVi
666aecfc7c feat(translator): lossless passthrough via CLI tool + provider pairing
Add clientDetector utility to identify CLI tools (Claude Code, Gemini CLI,
Antigravity, Codex) from request headers. When the CLI tool and provider
are a native pair, skip all translation — only swap model and Bearer token.
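The pairing check might be sketched as below; the header fingerprints and the native-pair table are illustrative guesses, not the detector's real rules:

```javascript
// Identify the calling CLI tool from request headers (heuristics assumed).
function detectClient(headers) {
  const ua = (headers['user-agent'] || '').toLowerCase();
  if (ua.includes('claude-code')) return 'claude-code';
  if (ua.includes('gemini-cli')) return 'gemini-cli';
  if (ua.includes('codex')) return 'codex';
  return null;
}

// Hypothetical native client/provider pairings.
const NATIVE_PAIRS = { 'claude-code': 'anthropic', 'gemini-cli': 'gemini-cli', codex: 'codex' };

// Native pair: skip all translation, only swap model and Bearer token.
function isLosslessPassthrough(headers, providerId) {
  const client = detectClient(headers);
  return client !== null && NATIVE_PAIRS[client] === providerId;
}
```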

Made-with: Cursor
2026-04-04 23:48:58 +07:00
decolua
333e704b2a MODEL_CAPS 2026-04-04 23:24:24 +07:00
decolua
2b1faeb2c6 Fix: Qwen provider 2026-04-04 12:38:39 +07:00
Anurag Saxena
5fe2c81cf9 fix: skip function_call items with empty/missing name to prevent Codex 400 error (closes #444) (#487) 2026-04-04 08:51:00 +07:00
Anurag Saxena
38eabae2e7 fix: retry /responses endpoint when GitHub returns model not supported (closes #470) (#488) 2026-04-04 08:50:38 +07:00
decolua
93b8668e9e Fix AG 2026-04-01 11:48:38 +07:00
decolua
9708541f6d Fix bug 2026-03-31 15:44:19 +07:00
Kwan96
ffa172c92d fix(claude-to-openai): emit closing </think> tag instead of empty reasoning_content (#454)
Replace empty reasoning_content with an explicit </think> closing tag when exiting a thinking block, to properly signal the end of the reasoning section in streaming responses.
2026-03-31 09:21:11 +07:00
decolua
01787a3d5b Fix bug 2026-03-30 17:27:15 +07:00
kwanLeeFrmVi
054facb08b fix(gemini): preserve thoughtSignature via tool_call ID smuggling + fix ELOCKED mutex
- Encode thoughtSignature into tool_call.id using _TSIG_ delimiter and base64url
- Decode _TSIG_ on request to restore thoughtSignature for Gemini multi-turn thinking
- Track pendingThoughtSignature across parts for deferred signature attachment
- Add LocalMutex (2-layer locking) to prevent ELOCKED on concurrent DB access
- Increase lockfile retries from 5 to 15 for multi-process robustness
- Restore db.json seed on first run to prevent ENOENT on lockfile.lock
- Use process.env.BASE_URL fallback in models test route
- Remove gemini-3-flash-lite-preview from provider models
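The ID-smuggling round trip above can be sketched as follows; the _TSIG_ delimiter and base64url encoding come from the commit, while the function names are illustrative:

```javascript
const TSIG = '_TSIG_';

// Smuggle the thoughtSignature inside the tool_call id. base64url keeps
// the result within the safe [a-zA-Z0-9_-] character set.
function encodeToolCallId(id, thoughtSignature) {
  if (!thoughtSignature) return id;
  const sig = Buffer.from(thoughtSignature, 'utf8').toString('base64url');
  return `${id}${TSIG}${sig}`;
}

// On the next request, split on the delimiter to restore the signature
// for Gemini multi-turn thinking.
function decodeToolCallId(encoded) {
  const i = encoded.indexOf(TSIG);
  if (i === -1) return { id: encoded, thoughtSignature: null };
  return {
    id: encoded.slice(0, i),
    thoughtSignature: Buffer.from(encoded.slice(i + TSIG.length), 'base64url').toString('utf8'),
  };
}
```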

Co-authored-by: kwanLeeFrmVi <quanle96@outlook.com>
Closes #450

Made-with: Cursor
2026-03-30 16:57:28 +07:00
kwanLeeFrmVi
1c160cc8d9 feat(claude-code): spoof TLS fingerprint and stabilize headers for Anthropic
- Add claudeHeaderCache.js to intercept and cache live Claude Code client headers
- Forward cached headers dynamically to api.anthropic.com via default.js
- Strip first-party identity headers (x-app, claude-code-* beta) for non-Anthropic upstreams
- Validate and sanitize tool call IDs to match Anthropic pattern (^[a-zA-Z0-9_-]+$)
- Skip thinking blocks when applying cache_control; fix max_tokens buffer (+1024)
- Strip cache_control from thinking blocks in openai-to-claude translator
- Comment out thoughtSignature in Gemini translator (kept for reference)
- Expand .gitignore to match all deploy*.sh variants

Co-authored-by: kwanLeeFrmVi <quanle96@outlook.com>
Closes #433

Made-with: Cursor
2026-03-30 16:27:28 +07:00
bitgineer
83354889cf fix: handle anthropic-compatible providers in BaseExecutor (#428)
The BaseExecutor's buildUrl() and buildHeaders() methods only handled
openai-compatible-* providers but not anthropic-compatible-* providers.
This caused Anthropic-compatible synthetic providers to fail API testing
by hitting the wrong endpoint (returning documentation instead of valid
API responses) and using incorrect auth headers.

Changes:
- Added buildUrl() handling for anthropic-compatible-* providers
  to append /messages path
- Added buildHeaders() handling for x-api-key header and
  anthropic-version for anthropic-compatible providers
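A minimal sketch of the two branches, assuming provider ids prefixed with 'anthropic-compatible-' and a config object carrying baseUrl and apiKey; the exact field names are assumptions:

```javascript
// Anthropic-compatible providers get the /messages path; the OpenAI-style
// default shown here stands in for the other branches of the real method.
function buildUrl(provider) {
  const base = provider.baseUrl.replace(/\/$/, '');
  if (provider.id.startsWith('anthropic-compatible-')) {
    return `${base}/messages`;
  }
  return `${base}/chat/completions`;
}

// Anthropic-compatible providers authenticate with x-api-key plus an
// anthropic-version header instead of a Bearer token.
function buildHeaders(provider) {
  if (provider.id.startsWith('anthropic-compatible-')) {
    return {
      'x-api-key': provider.apiKey,
      'anthropic-version': '2023-06-01',
      'content-type': 'application/json',
    };
  }
  return { authorization: `Bearer ${provider.apiKey}`, 'content-type': 'application/json' };
}
```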

Fixes #XXX

Co-authored-by: Bitgineer <bitgineer@bitgineer.shop>
2026-03-30 12:27:09 +07:00
Manuel B.
cd1e06b641 fix: add missing clientId to github provider config for OAuth token refresh (#442)
The github provider in open-sse/config/providers.js was missing clientId,
causing refreshGitHubToken() to send client_id=undefined on 401 retry.
Also guard against undefined clientSecret in both refresh implementations.
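The guard can be sketched as below; clientId/clientSecret are assumed to live on the provider config as in open-sse/config/providers.js, and the helper name is illustrative:

```javascript
// Build the OAuth refresh body, omitting credentials that are not
// configured so we never send client_id=undefined upstream.
function buildRefreshParams(cfg, refreshToken) {
  const params = new URLSearchParams({
    grant_type: 'refresh_token',
    refresh_token: refreshToken,
  });
  if (cfg.clientId) params.set('client_id', cfg.clientId);
  if (cfg.clientSecret) params.set('client_secret', cfg.clientSecret);
  return params;
}
```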

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-30 12:24:31 +07:00
decolua
e6299eef56 Fix Bug 2026-03-30 12:21:24 +07:00
decolua
abbf8ec86f feat: add GitLab Duo and CodeBuddy support, update observability settings 2026-03-30 11:28:07 +07:00
decolua
11e6004fcb fix: correct finish_reason for tool calls in OpenAI Responses translator
Apply fix from PR #354 by @tannk4w to properly signal tool_calls finish_reason
when model emits tool calls, allowing OpenAI-compatible clients to continue with
tool result processing instead of stopping prematurely.

Refactored finish_reason logic into computeFinishReason() helper to eliminate
duplication and improve maintainability across flush and completion paths.
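The helper's core decision might look like this; the real translator tracks more state, and this sketch only captures the tool_calls-vs-stop choice described above:

```javascript
// A model that emitted tool calls must report finish_reason 'tool_calls'
// so OpenAI-compatible clients proceed to run tools instead of stopping.
function computeFinishReason(sawToolCalls, upstreamReason) {
  if (sawToolCalls) return 'tool_calls';
  return upstreamReason || 'stop';
}
```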

Co-authored-by: tannk4w <tannk@tmi-soft.vn>

Thanks to @tannk4w, @trungtq2799, @quanhavn, and @East-rayyy for the thorough
review and improvement suggestions on the original PR.

Made-with: Cursor
2026-03-28 14:45:07 +07:00
Anurag Saxena
fcc8320753 feat: add OpenCode provider support (#387)
Adds OpenCode (https://github.com/opencode-ai/opencode) as a supported
provider. OpenCode is an open-source terminal AI coding assistant with
an OpenAI-compatible API running locally.

Changes:
- open-sse/config/providers.js: add opencode baseUrl (localhost:4096)
  with openai format (fully compatible, no custom headers needed)
- open-sse/services/model.js: add 'oc' alias → opencode
- src/shared/constants/providers.js: add opencode to subscription
  providers with alias 'oc', icon 'terminal', color #E87040

Usage after setup: use model prefix 'oc/<model>' to route through
a running OpenCode instance (e.g. oc/claude-sonnet-4-5).

Closes #378
2026-03-27 11:17:23 +07:00
anuragg-saxenaa
f05d64e073 fix: use project-scoped Vertex URL for SA JSON auth and add ?alt=sse for streaming (closes #388)
When using SA JSON + Bearer token, Vertex AI requires a project-scoped URL.
The old code used the global publishers endpoint which only works with a raw API key,
causing RESOURCE_PROJECT_INVALID errors.

Changes in open-sse/executors/vertex.js buildUrl():
- SA JSON path: projects/{projectId}/locations/{location}/publishers/google/models/{model}:{action}
- Appends ?alt=sse for streaming on SA JSON path
- Location defaults to us-central1, overridable via providerSpecificData.location
- Raw API key path unchanged (global publishers + ?key= param)
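The branch described above can be sketched as follows; only the path shape comes from the commit message, while the host layout and option names here are assumptions:

```javascript
// Project-scoped URL for SA JSON + Bearer auth; global publishers
// endpoint with ?key= for raw API keys.
function buildVertexUrl({ projectId, location = 'us-central1', model, action, stream, apiKey }) {
  if (projectId) {
    let url = `https://${location}-aiplatform.googleapis.com/v1` +
      `/projects/${projectId}/locations/${location}` +
      `/publishers/google/models/${model}:${action}`;
    if (stream) url += '?alt=sse'; // SSE framing for streaming responses
    return url;
  }
  // Raw API key path unchanged: global publishers + ?key= param
  return `https://aiplatform.googleapis.com/v1/publishers/google/models/${model}:${action}?key=${apiKey}`;
}
```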

Co-authored-by: anuragg-saxenaa <anuragg.saxenaa@gmail.com>
Made-with: Cursor
2026-03-27 11:03:49 +07:00