feat(translator): add thinking parameter support in OpenAI → Claude

Preserve thinking configuration when converting OpenAI requests to Claude format.

- Handle thinking.type with 'enabled' as default
- Preserve thinking.budget_tokens when present
- Preserve thinking.max_tokens when present

This enables proper thinking mode support for o1-series models when routed through 9Router to Claude endpoints.

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
(cherry picked from commit 65d80e9269cc6789cb1522b276e8b8399fddbcab)
This commit is contained in:
Blade096
2026-02-07 21:35:09 +08:00
parent 20a86d6561
commit 54e01d617d


@@ -141,6 +141,15 @@ export function openaiToClaudeRequest(model, body, stream) {
     result.tool_choice = convertOpenAIToolChoice(body.tool_choice);
   }
+  // Thinking configuration
+  if (body.thinking) {
+    result.thinking = {
+      type: body.thinking.type || "enabled",
+      ...(body.thinking.budget_tokens && { budget_tokens: body.thinking.budget_tokens }),
+      ...(body.thinking.max_tokens && { max_tokens: body.thinking.max_tokens })
+    };
+  }
   // Attach toolNameMap to result for response translation
   if (toolNameMap.size > 0) {
     result._toolNameMap = toolNameMap;
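
The translation logic above can be exercised in isolation. Below is a minimal sketch that lifts the thinking-conversion step into a hypothetical standalone helper (`translateThinking` is not part of the actual diff; the real logic lives inline in `openaiToClaudeRequest`). It shows how the default `type` is applied and how the optional budget fields are carried over only when present:

```javascript
// Hypothetical standalone version of the thinking-translation step above.
// Mirrors the committed logic: default type "enabled", optional budgets
// copied only when truthy (note: a budget of 0 would be dropped).
function translateThinking(body) {
  if (!body.thinking) return {};
  return {
    thinking: {
      type: body.thinking.type || "enabled",
      ...(body.thinking.budget_tokens && { budget_tokens: body.thinking.budget_tokens }),
      ...(body.thinking.max_tokens && { max_tokens: body.thinking.max_tokens }),
    },
  };
}

// Example: an OpenAI-style request carrying only a thinking budget.
const out = translateThinking({ thinking: { budget_tokens: 2048 } });
console.log(out.thinking); // { type: 'enabled', budget_tokens: 2048 }
```

One consequence of the truthiness checks (`&&`) is that explicit zero budgets are silently omitted rather than forwarded; a `!= null` guard would preserve them if that ever matters.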