fix(github): preserve reasoning_effort for non-Claude models (#713)

The previous blanket strip in GithubExecutor.transformRequest removed
`thinking` AND `reasoning_effort` for every GitHub-routed model, to avoid
the 400s Copilot returns when OpenClaw sends its Claude thinking payload.
That regressed GPT-5 family support (gh/gpt-5-mini honors
reasoning_effort: low/medium/high).

Make supportsThinking(model) model-aware: return false only for Claude
models, so the strip fires only where the upstream actually rejects these
fields.
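
The intended strip logic can be sketched as follows. This is a minimal,
self-contained illustration, not the actual executor code: `ChatBody` is a
hypothetical request shape, and the body of `transformRequest` here is
reduced to just the field-stripping step described above.

```typescript
// Hypothetical minimal shape of a /chat/completions request body.
interface ChatBody {
  model: string;
  thinking?: { type: string };
  reasoning_effort?: "low" | "medium" | "high";
  [key: string]: unknown;
}

// Model-aware check: only Claude-on-Copilot rejects these fields with a 400.
function supportsThinking(model: string): boolean {
  return !/claude/i.test(model);
}

// Sketch of the stripping step: drop thinking/reasoning_effort only for
// models whose upstream rejects them; pass everything else through intact.
function transformRequest(model: string, body: ChatBody): ChatBody {
  if (supportsThinking(model)) return body;
  const { thinking: _t, reasoning_effort: _e, ...rest } = body;
  return rest as ChatBody;
}
```

With this shape, a Claude-routed request loses both fields while a
GPT-5-routed request keeps its `reasoning_effort` untouched.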

Benchmarks on /v1/chat/completions via GitHub Copilot:
  effort=(none) → 64 reasoning_tokens, ~2.0s
  effort=low    → 0  reasoning_tokens, ~1.55s
  effort=medium → 64 reasoning_tokens, ~1.9s
  effort=high   → 128 reasoning_tokens, ~2.2s

Made-with: Cursor
omar-nahhas
2026-04-22 10:07:18 +07:00
committed by decolua
parent d8c0a7ef44
commit 95841f9a48


@@ -114,10 +114,11 @@ export class GithubExecutor extends BaseExecutor {
   return !/gpt-5\.4/i.test(model);
 }
-// GitHub Copilot /chat/completions doesn't support thinking/reasoning_effort.
-// OpenClaw sends thinking: { type: "enabled" } for Claude models which causes 400.
-supportsThinking() {
-  return false;
+// GitHub Copilot /chat/completions rejects Claude-style thinking payloads
+// (OpenClaw sends thinking: { type: "enabled" } → upstream 400).
+// GPT-5 family on Copilot DOES honor reasoning_effort, so only strip for Claude. (#713)
+supportsThinking(model) {
+  return !/claude/i.test(model);
 }
 transformRequest(model, body, stream, credentials) {