The GitHub Copilot /chat/completions endpoint does not support the thinking
or reasoning_effort fields. OpenClaw sends thinking: { type: "enabled" }
for Claude models, which causes a 400 Bad Request.
Add supportsThinking() and strip both fields in transformRequest before
sending to the upstream endpoint.
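A minimal sketch of the change, assuming a supportsThinking() predicate keyed by provider id; the 'github-copilot' id string and the field layout are illustrative, not the actual code:

```javascript
// Hypothetical provider id; the real check may key off other metadata.
function supportsThinking(providerId) {
  // GitHub Copilot's /chat/completions rejects these fields with 400.
  return providerId !== 'github-copilot';
}

function transformRequest(body, providerId) {
  if (supportsThinking(providerId)) return body;
  // Drop the unsupported fields before forwarding upstream.
  const { thinking, reasoning_effort, ...rest } = body;
  return rest;
}

const cleaned = transformRequest(
  { model: 'claude-sonnet-4', thinking: { type: 'enabled' }, reasoning_effort: 'high' },
  'github-copilot'
);
console.log('thinking' in cleaned); // false
```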
Co-authored-by: anuragg-saxenaa <anuragg.saxenaa@gmail.com>
Made-with: Cursor
* fix: add multi-model support for Factory Droid CLI tool (closes #521)
* fix: show quota auth expired message for Kiro social auth accounts (closes #588)
Remote HTTP(S) image URLs are fetched and inlined as base64 data URIs
in a new prefetchImages() step run before super.execute(), so the body
sent to Codex contains resolved image bytes instead of URLs the backend
cannot access.
Scope is limited to the Codex executor — base executor and other
providers are untouched.
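The prefetch step could look roughly like this; prefetchImages() is the only name taken from the commit, the message shape follows the OpenAI content-parts convention, and the injectable fetchFn is an addition for testability:

```javascript
// Hedged sketch: resolve remote image URLs into base64 data URIs so the
// upstream backend never has to fetch them itself.
async function prefetchImages(messages, fetchFn = fetch) {
  return Promise.all(messages.map(async (msg) => {
    if (!Array.isArray(msg.content)) return msg; // plain text passes through
    const content = await Promise.all(msg.content.map(async (part) => {
      const url = part?.image_url?.url;
      // Only http(s) URLs need inlining; data: URIs are already resolved.
      if (!url || !/^https?:\/\//.test(url)) return part;
      const res = await fetchFn(url);
      const mime = res.headers.get('content-type') || 'image/png';
      const b64 = Buffer.from(await res.arrayBuffer()).toString('base64');
      return { ...part, image_url: { url: `data:${mime};base64,${b64}` } };
    }));
    return { ...msg, content };
  }));
}
```

Run before super.execute(), this leaves non-image messages untouched and rewrites only the image_url parts.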
Co-authored-by: anuragg-saxenaa <anuragg.saxenaa@gmail.com>
Made-with: Cursor
* Add Claude Opus 4.7 to cc and cl provider lists
Research confirmed the GA release on April 16, 2026. Adding to:
- cc (Claude Code): claude-opus-4-7
- cl (Cline): anthropic/claude-opus-4.7
Refs: TICKET-20260416-ENG-O4.7-001
* fix: add Blackbox AI as a supported provider (closes #599)
- Integrated Google TTS languages from a separate module for better maintainability.
- Updated local device voice fetching to support both macOS and Windows, improving cross-platform compatibility.
- Enhanced dashboard route protection by adding dynamic settings for login requirements and tunnel access.
- Introduced UI elements for managing security settings related to API key requirements and dashboard access via tunnel.
- Added default TTS response example in the media provider page for better user guidance.
- Updated constants to reflect changes in TTS provider configurations.
This commit improves the overall user experience and security of the TTS features.
sseToJsonHandler.js unconditionally deleted reasoning_content from all
non-streaming responses (added for Firecrawl SDK compatibility). This
breaks thinking models (Qwen3.5, Claude extended thinking, etc.) where
the model may use all tokens for reasoning, leaving content empty.
When reasoning_content is stripped in that case, the response appears
completely empty to the client.
Fix: only strip reasoning_content when the response also has non-empty
content, so that reasoning output is preserved when it is the only
useful output.
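A sketch of the guarded strip, assuming the OpenAI non-streaming response shape; the surrounding sseToJsonHandler.js wiring is not reproduced here:

```javascript
// Only strip reasoning_content when there is real content to fall back
// on; otherwise a thinking-only answer would come back empty.
function stripReasoningContent(response) {
  for (const choice of response.choices ?? []) {
    const msg = choice.message;
    if (!msg) continue;
    if (msg.reasoning_content !== undefined && msg.content) {
      delete msg.reasoning_content;
    }
  }
  return response;
}
```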
Co-authored-by: Agent Zero <agent@agent-zero.local>
Cursor's API now rejects requests with outdated client versions,
returning "[400]: Update Required for Composer 2". Bump
x-cursor-client-version from 2.3.41 to 3.1.0 across all three
locations where it is set.
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add clientDetector utility to identify CLI tools (Claude Code, Gemini CLI,
Antigravity, Codex) from request headers. When the CLI tool and provider
are a native pair, skip all translation — only swap model and Bearer token.
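A rough sketch of the detection idea; the user-agent substrings and the native-pair table below are assumptions about typical CLI headers, not the detector's actual rules:

```javascript
// Hypothetical client -> native provider table.
const NATIVE_PAIRS = {
  'claude-code': 'anthropic',
  'gemini-cli': 'gemini',
  codex: 'openai',
};

function detectClient(headers) {
  const ua = (headers['user-agent'] || '').toLowerCase();
  if (ua.includes('claude-code')) return 'claude-code';
  if (ua.includes('gemini-cli')) return 'gemini-cli';
  if (ua.includes('codex')) return 'codex';
  return null;
}

function isNativePair(headers, provider) {
  const client = detectClient(headers);
  // Native pair: skip translation entirely, only swap model + Bearer token.
  return client !== null && NATIVE_PAIRS[client] === provider;
}
```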
Made-with: Cursor
Replace empty reasoning_content with explicit </think> closing tag when exiting thinking block to properly signal end of reasoning section in streaming responses.
- Encode thoughtSignature into tool_call.id using _TSIG_ delimiter and base64url
- Decode _TSIG_ on request to restore thoughtSignature for Gemini multi-turn thinking
- Track pendingThoughtSignature across parts for deferred signature attachment
- Add LocalMutex (2-layer locking) to prevent ELOCKED on concurrent DB access
- Increase lockfile retries from 5 to 15 for multi-process robustness
- Restore db.json seed on first run to prevent ENOENT on lockfile.lock
- Use process.env.BASE_URL fallback in models test route
- Remove gemini-3-flash-lite-preview from provider models
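The first two bullets above can be sketched as a round-trip through tool_call.id; the _TSIG_ delimiter and base64url encoding are from the commit, while the id layout is an assumed convention:

```javascript
// Embed the Gemini thoughtSignature in the tool call id so it survives
// the OpenAI-shaped round trip, then recover it on the next request.
function encodeToolCallId(id, thoughtSignature) {
  if (!thoughtSignature) return id;
  const sig = Buffer.from(thoughtSignature, 'utf8').toString('base64url');
  return `${id}_TSIG_${sig}`;
}

function decodeToolCallId(encoded) {
  const i = encoded.indexOf('_TSIG_');
  if (i === -1) return { id: encoded, thoughtSignature: null };
  return {
    id: encoded.slice(0, i),
    thoughtSignature: Buffer.from(encoded.slice(i + 6), 'base64url').toString('utf8'),
  };
}
```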
Co-authored-by: kwanLeeFrmVi <quanle96@outlook.com>
Closes #450
Made-with: Cursor
- Add claudeHeaderCache.js to intercept and cache live Claude Code client headers
- Forward cached headers dynamically to api.anthropic.com via default.js
- Strip first-party identity headers (x-app, claude-code-* beta) for non-Anthropic upstreams
- Validate and sanitize tool call IDs to match Anthropic pattern (^[a-zA-Z0-9_-]+$)
- Skip thinking blocks when applying cache_control; fix max_tokens buffer (+1024)
- Strip cache_control from thinking blocks in openai-to-claude translator
- Comment out thoughtSignature in Gemini translator (kept for reference)
- Expand .gitignore to match all deploy*.sh variants
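The ID sanitization bullet could be implemented along these lines; the pattern is from the commit, while the replacement strategy and fallback name are assumptions:

```javascript
// Anthropic requires tool call ids matching this pattern.
const ANTHROPIC_ID = /^[a-zA-Z0-9_-]+$/;

function sanitizeToolCallId(id) {
  if (ANTHROPIC_ID.test(id)) return id;
  // Replace every disallowed character so the result matches the pattern.
  const cleaned = id.replace(/[^a-zA-Z0-9_-]/g, '_');
  return cleaned.length ? cleaned : 'toolu_fallback'; // hypothetical fallback
}
```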
Co-authored-by: kwanLeeFrmVi <quanle96@outlook.com>
Closes #433
Made-with: Cursor
The BaseExecutor's buildUrl() and buildHeaders() methods only handled
openai-compatible-* providers but not anthropic-compatible-* providers.
This caused Anthropic-compatible synthetic providers to fail API testing
by hitting the wrong endpoint (returning documentation instead of valid
API responses) and using incorrect auth headers.
Changes:
- Added buildUrl() handling for anthropic-compatible-* providers
to append /messages path
- Added buildHeaders() handling for x-api-key header and
anthropic-version for anthropic-compatible providers
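A condensed sketch of the two additions; the real methods live on BaseExecutor and take different arguments, so treat the signatures here as illustrative:

```javascript
function buildUrl(provider, baseUrl) {
  if (provider.startsWith('anthropic-compatible-')) {
    // Anthropic-style providers expose the Messages endpoint.
    return `${baseUrl.replace(/\/$/, '')}/messages`;
  }
  if (provider.startsWith('openai-compatible-')) {
    return `${baseUrl.replace(/\/$/, '')}/chat/completions`;
  }
  return baseUrl;
}

function buildHeaders(provider, apiKey) {
  if (provider.startsWith('anthropic-compatible-')) {
    // Anthropic-style auth: x-api-key plus the required version header.
    return { 'x-api-key': apiKey, 'anthropic-version': '2023-06-01' };
  }
  return { Authorization: `Bearer ${apiKey}` };
}
```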
Fixes #XXX
Co-authored-by: Bitgineer <bitgineer@bitgineer.shop>
The github provider in open-sse/config/providers.js was missing clientId,
causing refreshGitHubToken() to send client_id=undefined on 401 retry.
Also guard against undefined clientSecret in both refresh implementations.
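The guard might look like this; the body shape is a standard OAuth token refresh and is not copied from refreshGitHubToken():

```javascript
// Only send client credentials that are actually configured, so the 401
// retry never emits client_id=undefined.
function buildRefreshBody(provider, refreshToken) {
  const params = new URLSearchParams({
    grant_type: 'refresh_token',
    refresh_token: refreshToken,
  });
  if (provider.clientId) params.set('client_id', provider.clientId);
  if (provider.clientSecret) params.set('client_secret', provider.clientSecret);
  return params.toString();
}
```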
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Apply fix from PR #354 by @tannk4w to properly signal tool_calls finish_reason
when model emits tool calls, allowing OpenAI-compatible clients to continue with
tool result processing instead of stopping prematurely.
Refactored finish_reason logic into computeFinishReason() helper to eliminate
duplication and improve maintainability across flush and completion paths.
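The extracted helper reduces to a small precedence rule; the argument names are assumptions about the handler's bookkeeping:

```javascript
// Tool calls must win so OpenAI-compatible clients continue the
// tool-result loop instead of treating the turn as finished.
function computeFinishReason(sawToolCalls, upstreamReason) {
  if (sawToolCalls) return 'tool_calls';
  return upstreamReason || 'stop';
}
```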
Co-authored-by: tannk4w <tannk@tmi-soft.vn>
Thanks to @tannk4w, @trungtq2799, @quanhavn, and @East-rayyy for the thorough
review and improvement suggestions on the original PR.
Made-with: Cursor
Adds OpenCode (https://github.com/opencode-ai/opencode) as a supported
provider. OpenCode is an open-source terminal AI coding assistant with
an OpenAI-compatible API running locally.
Changes:
- open-sse/config/providers.js: add opencode baseUrl (localhost:4096)
with openai format (fully compatible, no custom headers needed)
- open-sse/services/model.js: add 'oc' alias → opencode
- src/shared/constants/providers.js: add opencode to subscription
providers with alias 'oc', icon 'terminal', color #E87040
Usage after setup: use model prefix 'oc/<model>' to route through
a running OpenCode instance (e.g. oc/claude-sonnet-4-5).
Closes #378
When using SA JSON + Bearer token, Vertex AI requires a project-scoped URL.
The old code used the global publishers endpoint which only works with a raw API key,
causing RESOURCE_PROJECT_INVALID errors.
Changes in open-sse/executors/vertex.js buildUrl():
- SA JSON path: projects/{projectId}/locations/{location}/publishers/google/models/{model}:{action}
- Appends ?alt=sse for streaming on SA JSON path
- Location defaults to us-central1, overridable via providerSpecificData.location
- Raw API key path unchanged (global publishers + ?key= param)
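The SA JSON branch can be sketched as follows; the regional aiplatform host is an assumption (the commit only specifies the path shape and ?alt=sse):

```javascript
// Project-scoped Vertex URL for the SA JSON + Bearer token path.
function buildVertexUrl({ projectId, location = 'us-central1', model, action, stream }) {
  const host = `https://${location}-aiplatform.googleapis.com`;
  const path = `/v1/projects/${projectId}/locations/${location}` +
    `/publishers/google/models/${model}:${action}`;
  // Streaming responses are requested as SSE.
  return `${host}${path}${stream ? '?alt=sse' : ''}`;
}
```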
Co-authored-by: anuragg-saxenaa <anuragg.saxenaa@gmail.com>
Made-with: Cursor