The GitHub Copilot /chat/completions endpoint does not support the thinking
or reasoning_effort fields. OpenClaw sends thinking: { type: "enabled" }
for Claude models, which causes a 400 Bad Request.
Add supportsThinking() and strip both fields in transformRequest before
sending the request to the upstream endpoint.
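A minimal sketch of the stripping step; supportsThinking() and the endpoint check are assumptions about the shape of the fix, not the actual OpenClaw implementation:

```javascript
// Hypothetical: assume Copilot endpoints are identified by hostname.
function supportsThinking(endpoint) {
  // Copilot's /chat/completions rejects thinking-related fields with a 400.
  return !endpoint.includes("githubcopilot");
}

function transformRequest(body, endpoint) {
  const out = { ...body };
  if (!supportsThinking(endpoint)) {
    // Drop the fields the upstream endpoint does not accept.
    delete out.thinking;
    delete out.reasoning_effort;
  }
  return out;
}
```

Copying the body before deleting keeps the caller's request object untouched.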
Co-authored-by: anuragg-saxenaa <anuragg.saxenaa@gmail.com>
Made-with: Cursor
* fix: add multi-model support for Factory Droid CLI tool (closes #521)
* fix: show quota auth expired message for Kiro social auth accounts (closes #588)
Remote HTTP(S) image URLs are fetched and inlined as base64 data URIs
in a new prefetchImages() step run before super.execute(), so the body
sent to Codex contains resolved image bytes instead of URLs the backend
cannot access.
Scope is limited to the Codex executor — base executor and other
providers are untouched.
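The prefetch step can be sketched as below; the helper names (prefetchImages, toDataUri) and the OpenAI-style image_url message shape are assumptions, not the Codex executor's real code:

```javascript
// Encode raw bytes as an inline data URI the backend can read directly.
function toDataUri(mimeType, bytes) {
  return `data:${mimeType};base64,${Buffer.from(bytes).toString("base64")}`;
}

// Walk the messages and replace remote image URLs with inlined base64 bytes.
async function prefetchImages(messages) {
  for (const msg of messages) {
    for (const part of msg.content ?? []) {
      if (part.type === "image_url" && /^https?:\/\//.test(part.image_url.url)) {
        const res = await fetch(part.image_url.url);
        const mime = res.headers.get("content-type") ?? "image/png";
        part.image_url.url = toDataUri(mime, await res.arrayBuffer());
      }
    }
  }
  return messages;
}
```

Running this before super.execute() means the base executor never sees a remote URL.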
Co-authored-by: anuragg-saxenaa <anuragg.saxenaa@gmail.com>
Made-with: Cursor
* Add Claude Opus 4.7 to cc and cl provider lists
Research confirmed the GA release on April 16, 2026. Adding to:
- cc (Claude Code): claude-opus-4-7
- cl (Cline): anthropic/claude-opus-4.7
Refs: TICKET-20260416-ENG-O4.7-001
* fix: add Blackbox AI as a supported provider (closes #599)
- Integrated Google TTS languages from a separate module for better maintainability.
- Updated local device voice fetching to support both macOS and Windows, improving cross-platform compatibility.
- Enhanced dashboard route protection by adding dynamic settings for login requirements and tunnel access.
- Introduced UI elements for managing security settings related to API key requirements and dashboard access via tunnel.
- Added default TTS response example in the media provider page for better user guidance.
- Updated constants to reflect changes in TTS provider configurations.
This commit improves the overall user experience and security of the TTS features.
The fetchCompatibleModelIds() function had no timeout on its fetch() call,
causing /v1/models to hang indefinitely when an openai-compatible provider
was unreachable or slow to respond.
Additionally, upstream/cross-instance connections (provider IDs containing
a UUID suffix like openai-compatible-chat-XXXXXXXX) would trigger recursive
/models fetches between instances, creating infinite loops.
Fixes:
- Add AbortController with 5-second timeout to the fetch() call
- Skip dynamic model fetching for upstream/cross-instance connections
(detected by UUID suffix pattern in provider ID)
- Existing try/catch already handles abort errors gracefully
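The fixes above can be sketched as follows; the 8-hex-character suffix regex mirrors the openai-compatible-chat-XXXXXXXX pattern described above, and the function and field names are illustrative assumptions:

```javascript
// Provider IDs for upstream/cross-instance connections end in a hex suffix,
// e.g. openai-compatible-chat-1a2b3c4d (assumed 8 hex chars).
const UPSTREAM_ID = /-[0-9a-f]{8}$/i;

function isUpstreamConnection(providerId) {
  return UPSTREAM_ID.test(providerId);
}

async function fetchCompatibleModelIds(provider) {
  // Skip dynamic fetching for cross-instance connections to avoid
  // recursive /models loops between instances.
  if (isUpstreamConnection(provider.id)) return [];
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), 5000); // 5 s timeout
  try {
    const res = await fetch(`${provider.baseUrl}/v1/models`, {
      signal: controller.signal,
    });
    const { data } = await res.json();
    return data.map((m) => m.id);
  } catch {
    return []; // abort and network errors degrade to an empty list
  } finally {
    clearTimeout(timer);
  }
}
```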
Co-authored-by: Agent Zero <agent@agent-zero.local>
sseToJsonHandler.js unconditionally deleted reasoning_content from all
non-streaming responses (added for Firecrawl SDK compatibility). This
breaks thinking models (Qwen3.5, Claude extended thinking, etc.) where
the model may use all tokens for reasoning, leaving content empty.
When reasoning_content is stripped in that case, the response appears
completely empty to the client.
Fix: only strip reasoning_content when the response also has non-empty
content, so that reasoning output is preserved when it is the only
useful output.
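A minimal sketch of the corrected rule; the OpenAI-style choices[].message response shape and the helper name are assumptions, not the literal sseToJsonHandler.js code:

```javascript
function stripReasoningContent(response) {
  for (const choice of response.choices ?? []) {
    const msg = choice.message;
    // Only drop reasoning_content when there is non-empty content to fall
    // back on, so reasoning-only responses are not emptied out.
    if (msg && msg.content && msg.reasoning_content !== undefined) {
      delete msg.reasoning_content;
    }
  }
  return response;
}
```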
Co-authored-by: Agent Zero <agent@agent-zero.local>
- Allow selecting multiple models for 9router provider
- Merge models instead of overwriting (backwards-compatible)
- Support setting active model via click or Apply
- Click active model to clear default (model = '')
- Remove individual models via X button (persists to config)
- Add PATCH endpoint for clearing active model
- Update GET to return normalized model list
- Preserve subagent model configuration for explorer subagent
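The merge and removal behaviour above can be sketched as below; the config shape and helper names are illustrative assumptions, not the actual 9router code:

```javascript
// Merge incoming models with existing ones instead of overwriting,
// deduplicating while preserving order (backwards-compatible).
function mergeModels(existing, incoming) {
  return [...new Set([...existing, ...incoming])];
}

// Remove a single model; if it was the active default, clear it
// (model = "" mirrors the click-to-clear behaviour).
function removeModel(config, modelId) {
  config.models = config.models.filter((m) => m !== modelId);
  if (config.model === modelId) config.model = "";
  return config;
}
```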
Made-with: Cursor
- Replaced `bun run build:bun` with `npm run build` in Dockerfile for consistency.
- Enhanced `DOCKER.md` to include `DATA_DIR` environment variable usage for database persistence.
- Clarified paths for container and host data storage.