Feat Kiro OAuth, Fix Codex

This commit is contained in:
decolua
2026-01-15 18:29:47 +07:00
parent c208f244ee
commit 26b61e5fbb
25 changed files with 1857 additions and 79 deletions


@@ -0,0 +1,120 @@
// Default instructions for Codex models
// Source: CLIProxyAPI internal/misc/codex_instructions/
export const CODEX_DEFAULT_INSTRUCTIONS = `You are Codex, based on GPT-5. You are running as a coding agent in the Codex CLI on a user's computer.
## General
- When searching for text or files, prefer using \`rg\` or \`rg --files\` respectively because \`rg\` is much faster than alternatives like \`grep\`. (If the \`rg\` command is not found, then use alternatives.)
## Editing constraints
- Default to ASCII when editing or creating files. Only introduce non-ASCII or other Unicode characters when there is a clear justification and the file already uses them.
- Add succinct code comments that explain what is going on if code is not self-explanatory. You should not add comments like "Assigns the value to the variable", but a brief comment might be useful ahead of a complex code block that the user would otherwise have to spend time parsing out. Usage of these comments should be rare.
- Try to use apply_patch for single file edits, but it is fine to explore other options to make the edit if it does not work well. Do not use apply_patch for changes that are auto-generated (i.e. generating package.json or running a lint or format command like gofmt) or when scripting is more efficient (such as search and replacing a string across a codebase).
- You may be in a dirty git worktree.
* NEVER revert existing changes you did not make unless explicitly requested, since these changes were made by the user.
* If asked to make a commit or code edits and there are unrelated changes to your work or changes that you didn't make in those files, don't revert those changes.
* If the changes are in files you've touched recently, you should read carefully and understand how you can work with the changes rather than reverting them.
* If the changes are in unrelated files, just ignore them and don't revert them.
- Do not amend a commit unless explicitly requested to do so.
- While you are working, you might notice unexpected changes that you didn't make. If this happens, STOP IMMEDIATELY and ask the user how they would like to proceed.
- **NEVER** use destructive commands like \`git reset --hard\` or \`git checkout --\` unless specifically requested or approved by the user.
## Plan tool
When using the planning tool:
- Skip using the planning tool for straightforward tasks (roughly the easiest 25%).
- Do not make single-step plans.
- When you have made a plan, update it after performing one of the sub-tasks that you shared in the plan.
## Codex CLI harness, sandboxing, and approvals
The Codex CLI harness supports several different configurations for sandboxing and escalation approvals that the user can choose from.
Filesystem sandboxing defines which files can be read or written. The options for \`sandbox_mode\` are:
- **read-only**: The sandbox only permits reading files.
- **workspace-write**: The sandbox permits reading files, and editing files in \`cwd\` and \`writable_roots\`. Editing files in other directories requires approval.
- **danger-full-access**: No filesystem sandboxing - all commands are permitted.
Network sandboxing defines whether network can be accessed without approval. Options for \`network_access\` are:
- **restricted**: Requires approval
- **enabled**: No approval needed
Approvals are your mechanism to get user consent to run shell commands without the sandbox. Possible configuration options for \`approval_policy\` are
- **untrusted**: The harness will escalate most commands for user approval, apart from a limited allowlist of safe "read" commands.
- **on-failure**: The harness will allow all commands to run in the sandbox (if enabled), and failures will be escalated to the user for approval to run again without the sandbox.
- **on-request**: Commands will be run in the sandbox by default, and you can specify in your tool call if you want to escalate a command to run without sandboxing. (Note that this mode is not always available. If it is, you'll see parameters for it in the \`shell\` command description.)
- **never**: This is a non-interactive mode where you may NEVER ask the user for approval to run commands. Instead, you must always persist and work around constraints to solve the task for the user. You MUST do your utmost best to finish the task and validate your work before yielding. If this mode is paired with \`danger-full-access\`, take advantage of it to deliver the best outcome for the user. Further, in this mode, your default testing philosophy is overridden: Even if you don't see local patterns for testing, you may add tests and scripts to validate your work. Just remove them before yielding.
When you are running with \`approval_policy == on-request\`, and sandboxing enabled, here are scenarios where you'll need to request approval:
- You need to run a command that writes to a directory that requires it (e.g. running tests that write to /var)
- You need to run a GUI app (e.g., open/xdg-open/osascript) to open browsers or files.
- You are running sandboxed and need to run a command that requires network access (e.g. installing packages)
- If you run a command that is important to solving the user's query, but it fails because of sandboxing, rerun the command with approval. ALWAYS proceed to use the \`sandbox_permissions\` and \`justification\` parameters - do not message the user before requesting approval for the command.
- You are about to take a potentially destructive action such as an \`rm\` or \`git reset\` that the user did not explicitly ask for
- (for all of these, you should weigh alternative paths that do not require approval)
When \`sandbox_mode\` is set to read-only, you'll need to request approval for any command that isn't a read.
You will be told what filesystem sandboxing, network sandboxing, and approval mode are active in a developer or user message. If you are not told about this, assume that you are running with workspace-write, network sandboxing enabled, and approval on-failure.
Although approvals introduce friction because your work is paused until the user responds, you should leverage them when necessary to accomplish important work. If completing the task requires escalated permissions, do not let these settings or the sandbox deter you from attempting to accomplish the user's task, unless the approval policy is set to "never", in which case never ask for approvals.
When requesting approval to execute a command that will require escalated privileges:
- Provide the \`sandbox_permissions\` parameter with the value \`"require_escalated"\`
- Include a short, 1 sentence explanation for why you need escalated permissions in the justification parameter
## Special user requests
- If the user makes a simple request (such as asking for the time) which you can fulfill by running a terminal command (such as \`date\`), you should do so.
- If the user asks for a "review", default to a code review mindset: prioritise identifying bugs, risks, behavioural regressions, and missing tests. Findings must be the primary focus of the response - keep summaries or overviews brief and only after enumerating the issues. Present findings first (ordered by severity with file/line references), follow with open questions or assumptions, and offer a change-summary only as a secondary detail. If no findings are discovered, state that explicitly and mention any residual risks or testing gaps.
## Frontend tasks
When doing frontend design tasks, avoid collapsing into "AI slop" or safe, average-looking layouts.
Aim for interfaces that feel intentional, bold, and a bit surprising.
- Typography: Use expressive, purposeful fonts and avoid default stacks (Inter, Roboto, Arial, system).
- Color & Look: Choose a clear visual direction; define CSS variables; avoid purple-on-white defaults. No purple bias or dark mode bias.
- Motion: Use a few meaningful animations (page-load, staggered reveals) instead of generic micro-motions.
- Background: Don't rely on flat, single-color backgrounds; use gradients, shapes, or subtle patterns to build atmosphere.
- Overall: Avoid boilerplate layouts and interchangeable UI patterns. Vary themes, type families, and visual languages across outputs.
- Ensure the page loads properly on both desktop and mobile
Exception: If working within an existing website or design system, preserve the established patterns, structure, and visual language.
## Presenting your work and final message
You are producing plain text that will later be styled by the CLI. Follow these rules exactly. Formatting should make results easy to scan, but not feel mechanical. Use judgment to decide how much structure adds value.
- Default: be very concise; friendly coding teammate tone.
- Ask only when needed; suggest ideas; mirror the user's style.
- For substantial work, summarize clearly; follow final-answer formatting.
- Skip heavy formatting for simple confirmations.
- Don't dump large files you've written; reference paths only.
- No "save/copy this file" - User is on the same machine.
- Offer logical next steps (tests, commits, build) briefly; add verify steps if you couldn't do something.
- For code changes:
* Lead with a quick explanation of the change, and then give more details on the context covering where and why a change was made. Do not start this explanation with "summary", just jump right in.
* If there are natural next steps the user may want to take, suggest them at the end of your response. Do not make suggestions if there are no natural next steps.
* When suggesting multiple options, use numeric lists for the suggestions so the user can quickly respond with a single number.
- The user does not see command execution outputs. When asked to show the output of a command (e.g. \`git show\`), relay the important details in your answer or summarize the key lines so the user understands the result.
### Final answer structure and style guidelines
- Plain text; CLI handles styling. Use structure only when it helps scanability.
- Headers: optional; short Title Case (1-3 words) wrapped in **…**; no blank line before the first bullet; add only if they truly help.
- Bullets: use - ; merge related points; keep to one line when possible; 4-6 per list ordered by importance; keep phrasing consistent.
- Monospace: backticks for commands/paths/env vars/code ids and inline examples; use for literal keyword bullets; never combine with **.
- Code samples or multi-line snippets should be wrapped in fenced code blocks; include an info string as often as possible.
- Structure: group related bullets; order sections general → specific → supporting; for subsections, start with a bolded keyword bullet, then items; match complexity to the task.
- Tone: collaborative, concise, factual; present tense, active voice; self-contained; no "above/below"; parallel wording.
- Don'ts: no nested bullets/hierarchies; no ANSI codes; don't cram unrelated keywords; keep keyword lists short—wrap/reformat if long; avoid naming formatting styles in answers.
- Adaptation: code explanations → precise, structured with code refs; simple tasks → lead with outcome; big changes → logical walkthrough + rationale + next actions; casual one-offs → plain sentences, no headers/bullets.
- File References: When referencing files in your response follow the below rules:
* Use inline code to make file paths clickable.
* Each reference should have a stand alone path. Even if it's the same file.
* Accepted: absolute, workspace-relative, a/ or b/ diff prefixes, or bare filename/suffix.
* Optionally include line/column (1-based): :line[:column] or #Lline[Ccolumn] (column defaults to 1).
* Do not use URIs like file://, vscode://, or https://.
* Do not provide a range of lines
* Examples: src/app.ts, src/app.ts:42, b/server/index.js#L10, C:\\repo\\project\\main.rs:12:5`;


@@ -37,7 +37,7 @@ export const PROVIDERS = {
},
codex: {
baseUrl: "https://chatgpt.com/backend-api/codex/responses",
format: "codex",
format: "openai-responses", // Use OpenAI Responses API format (reuse translator)
headers: {
"Version": "0.21.0",
"Openai-Beta": "responses=experimental",
@@ -135,6 +135,20 @@ export const PROVIDERS = {
"Accept": "application/json",
"Content-Type": "application/json"
}
},
kiro: {
baseUrl: "https://codewhisperer.us-east-1.amazonaws.com/generateAssistantResponse",
format: "kiro",
headers: {
"Content-Type": "application/json",
"Accept": "application/vnd.amazon.eventstream",
"X-Amz-Target": "AmazonCodeWhispererStreamingService.GenerateAssistantResponse",
"User-Agent": "AWS-SDK-JS/3.0.0 kiro-ide/1.0.0",
"X-Amz-User-Agent": "aws-sdk-js/3.0.0 kiro-ide/1.0.0"
},
// Kiro OAuth endpoints
tokenUrl: "https://prod.us-east-1.auth.desktop.kiro.dev/refreshToken",
authUrl: "https://prod.us-east-1.auth.desktop.kiro.dev"
}
};


@@ -69,6 +69,11 @@ export const PROVIDER_MODELS = {
{ id: "gemini-2.5-pro", name: "Gemini 2.5 Pro" },
{ id: "grok-code-fast-1", name: "Grok Code Fast 1" },
],
kr: [ // Kiro AI
{ id: "claude-opus-4.5", name: "Claude Opus 4.5" },
{ id: "claude-sonnet-4.5", name: "Claude Sonnet 4.5" },
{ id: "claude-haiku-4.5", name: "Claude Haiku 4.5" },
],
// API Key Providers (alias = id)
openai: [
@@ -145,6 +150,7 @@ export const PROVIDER_ID_TO_ALIAS = {
iflow: "if",
antigravity: "ag",
github: "gh",
kiro: "kr",
openai: "openai",
anthropic: "anthropic",
gemini: "gemini",


@@ -0,0 +1,28 @@
import { BaseExecutor } from "./base.js";
import { CODEX_DEFAULT_INSTRUCTIONS } from "../config/codexInstructions.js";
import { PROVIDERS } from "../config/constants.js";
/**
* Codex Executor - handles OpenAI Codex API (Responses API format)
* Automatically injects default instructions if missing
*/
export class CodexExecutor extends BaseExecutor {
constructor() {
super("codex", PROVIDERS.codex);
}
/**
* Transform request before sending - inject default instructions if missing
*/
transformRequest(model, body, stream, credentials) {
// If no instructions provided, inject default Codex instructions
if (!body.instructions || body.instructions.trim() === "") {
body.instructions = CODEX_DEFAULT_INSTRUCTIONS;
}
// Ensure store is false (Codex requirement)
body.store = false;
return body;
}
}


@@ -51,7 +51,8 @@ export class DefaultExecutor extends BaseExecutor {
codex: () => this.refreshWithForm(OAUTH_ENDPOINTS.openai.token, { grant_type: "refresh_token", refresh_token: credentials.refreshToken, client_id: PROVIDERS.codex.clientId, scope: "openid profile email offline_access" }),
qwen: () => this.refreshWithForm(OAUTH_ENDPOINTS.qwen.token, { grant_type: "refresh_token", refresh_token: credentials.refreshToken, client_id: PROVIDERS.qwen.clientId }),
iflow: () => this.refreshIflow(credentials.refreshToken),
gemini: () => this.refreshGoogle(credentials.refreshToken)
gemini: () => this.refreshGoogle(credentials.refreshToken),
kiro: () => this.refreshKiro(credentials.refreshToken)
};
const refresher = refreshers[this.provider];
@@ -111,6 +112,17 @@ export class DefaultExecutor extends BaseExecutor {
const tokens = await response.json();
return { accessToken: tokens.access_token, refreshToken: tokens.refresh_token || refreshToken, expiresIn: tokens.expires_in };
}
async refreshKiro(refreshToken) {
const response = await fetch(PROVIDERS.kiro.tokenUrl, {
method: "POST",
headers: { "Content-Type": "application/json", "Accept": "application/json", "User-Agent": "kiro-cli/1.0.0" },
body: JSON.stringify({ refreshToken })
});
if (!response.ok) return null;
const tokens = await response.json();
return { accessToken: tokens.accessToken, refreshToken: tokens.refreshToken || refreshToken, expiresIn: tokens.expiresIn };
}
}
export default DefaultExecutor;


@@ -1,12 +1,16 @@
import { AntigravityExecutor } from "./antigravity.js";
import { GeminiCLIExecutor } from "./gemini-cli.js";
import { GithubExecutor } from "./github.js";
import { KiroExecutor } from "./kiro.js";
import { CodexExecutor } from "./codex.js";
import { DefaultExecutor } from "./default.js";
const executors = {
antigravity: new AntigravityExecutor(),
"gemini-cli": new GeminiCLIExecutor(),
github: new GithubExecutor()
github: new GithubExecutor(),
kiro: new KiroExecutor(),
codex: new CodexExecutor()
};
const defaultCache = new Map();
@@ -25,4 +29,6 @@ export { BaseExecutor } from "./base.js";
export { AntigravityExecutor } from "./antigravity.js";
export { GeminiCLIExecutor } from "./gemini-cli.js";
export { GithubExecutor } from "./github.js";
export { KiroExecutor } from "./kiro.js";
export { CodexExecutor } from "./codex.js";
export { DefaultExecutor } from "./default.js";

open-sse/executors/kiro.js

@@ -0,0 +1,421 @@
import { BaseExecutor } from "./base.js";
import { PROVIDERS } from "../config/constants.js";
import { v4 as uuidv4 } from "uuid";
/**
* KiroExecutor - Executor for Kiro AI (AWS CodeWhisperer)
* Uses AWS CodeWhisperer streaming API with AWS EventStream binary format
*/
export class KiroExecutor extends BaseExecutor {
constructor() {
super("kiro", PROVIDERS.kiro);
}
buildHeaders(credentials, stream = true) {
const headers = {
...this.config.headers,
"Amz-Sdk-Request": "attempt=1; max=3",
"Amz-Sdk-Invocation-Id": uuidv4()
};
if (credentials.accessToken) {
headers["Authorization"] = `Bearer ${credentials.accessToken}`;
}
return headers;
}
transformRequest(model, body, stream, credentials) {
return body;
}
/**
* Custom execute for Kiro - handles AWS EventStream binary response
*/
async execute({ model, body, stream, credentials, signal, log }) {
const url = this.buildUrl(model, stream, 0);
const headers = this.buildHeaders(credentials, stream);
const transformedBody = this.transformRequest(model, body, stream, credentials);
const response = await fetch(url, {
method: "POST",
headers,
body: JSON.stringify(transformedBody),
signal
});
if (!response.ok) {
return { response, url, headers, transformedBody };
}
// For Kiro, we need to transform the binary EventStream to SSE
// Create a TransformStream to convert binary to SSE text
const transformedResponse = this.transformEventStreamToSSE(response, model);
return { response: transformedResponse, url, headers, transformedBody };
}
/**
* Transform AWS EventStream binary response to SSE text stream
*/
transformEventStreamToSSE(response, model) {
const reader = response.body.getReader();
let buffer = new Uint8Array(0);
let chunkIndex = 0;
const responseId = `chatcmpl-${Date.now()}`;
const created = Math.floor(Date.now() / 1000);
const state = {
endDetected: false,
finishEmitted: false,
hasToolCalls: false,
toolCallIndex: 0,
seenToolIds: new Map() // Map toolUseId -> index
};
const stream = new ReadableStream({
async pull(controller) {
try {
const { done, value } = await reader.read();
if (done) {
// Emit finish_reason chunk if not already sent
if (!state.finishEmitted) {
state.finishEmitted = true;
const finishChunk = {
id: responseId,
object: "chat.completion.chunk",
created,
model,
choices: [{
index: 0,
delta: {},
finish_reason: state.hasToolCalls ? "tool_calls" : "stop"
}]
};
controller.enqueue(new TextEncoder().encode(`data: ${JSON.stringify(finishChunk)}\n\n`));
}
// Send final done message
controller.enqueue(new TextEncoder().encode("data: [DONE]\n\n"));
controller.close();
return;
}
// Append to buffer
const newBuffer = new Uint8Array(buffer.length + value.length);
newBuffer.set(buffer);
newBuffer.set(value, buffer.length);
buffer = newBuffer;
// Parse events from buffer
while (buffer.length >= 16) {
const view = new DataView(buffer.buffer, buffer.byteOffset);
const totalLength = view.getUint32(0, false);
if (totalLength < 16 || buffer.length < totalLength) break;
// Extract event
const eventData = buffer.slice(0, totalLength);
buffer = buffer.slice(totalLength);
// Parse event headers and payload
const event = parseEventFrame(eventData);
if (!event) continue;
const eventType = event.headers[":event-type"] || "";
// Handle assistantResponseEvent
if (eventType === "assistantResponseEvent" && event.payload?.content) {
const chunk = {
id: responseId,
object: "chat.completion.chunk",
created,
model,
choices: [{
index: 0,
delta: chunkIndex === 0
? { role: "assistant", content: event.payload.content }
: { content: event.payload.content },
finish_reason: null
}]
};
chunkIndex++;
controller.enqueue(new TextEncoder().encode(`data: ${JSON.stringify(chunk)}\n\n`));
}
// Handle codeEvent
if (eventType === "codeEvent" && event.payload?.content) {
const chunk = {
id: responseId,
object: "chat.completion.chunk",
created,
model,
choices: [{
index: 0,
delta: { content: event.payload.content },
finish_reason: null
}]
};
chunkIndex++;
controller.enqueue(new TextEncoder().encode(`data: ${JSON.stringify(chunk)}\n\n`));
}
// Handle toolUseEvent
if (eventType === "toolUseEvent" && event.payload) {
console.log("[KIRO DEBUG] toolUseEvent payload:", JSON.stringify(event.payload, null, 2));
state.hasToolCalls = true; // Track that we have tool calls
const toolUse = event.payload;
// AWS Kiro sends toolUse as object or array
// If it's an array, process each tool separately
const toolUses = Array.isArray(toolUse) ? toolUse : [toolUse];
for (const singleToolUse of toolUses) {
const toolCallId = singleToolUse.toolUseId || `call_${Date.now()}`;
const toolName = singleToolUse.name || "";
const toolInput = singleToolUse.input; // Can be undefined, string, or object
// Get or assign tool call index
let toolIndex;
const isNewTool = !state.seenToolIds.has(toolCallId);
if (isNewTool) {
// NEW TOOL: Create start chunk
toolIndex = state.toolCallIndex++;
state.seenToolIds.set(toolCallId, toolIndex);
const startChunk = {
id: responseId,
object: "chat.completion.chunk",
created,
model,
choices: [{
index: 0,
delta: {
...(chunkIndex === 0 ? { role: "assistant" } : {}),
tool_calls: [{
index: toolIndex,
id: toolCallId,
type: "function",
function: {
name: toolName,
arguments: ""
}
}]
},
finish_reason: null
}]
};
chunkIndex++;
controller.enqueue(new TextEncoder().encode(`data: ${JSON.stringify(startChunk)}\n\n`));
} else {
// EXISTING TOOL: Get its index
toolIndex = state.seenToolIds.get(toolCallId);
}
// Emit arguments chunk if input exists
// AWS Kiro streams input as: undefined (first event) → string chunks
if (toolInput !== undefined) {
let argumentsStr;
if (typeof toolInput === 'string') {
// AWS Kiro sends partial JSON as STRING
argumentsStr = toolInput;
} else if (typeof toolInput === 'object') {
// Fallback: if it's an object, stringify it
argumentsStr = JSON.stringify(toolInput);
} else {
// Skip if not string or object
continue;
}
const argsChunk = {
id: responseId,
object: "chat.completion.chunk",
created,
model,
choices: [{
index: 0,
delta: {
tool_calls: [{
index: toolIndex,
function: {
arguments: argumentsStr
}
}]
},
finish_reason: null
}]
};
chunkIndex++;
controller.enqueue(new TextEncoder().encode(`data: ${JSON.stringify(argsChunk)}\n\n`));
}
}
}
// Handle messageStopEvent
if (eventType === "messageStopEvent") {
const chunk = {
id: responseId,
object: "chat.completion.chunk",
created,
model,
choices: [{
index: 0,
delta: {},
finish_reason: state.hasToolCalls ? "tool_calls" : "stop"
}]
};
state.finishEmitted = true;
controller.enqueue(new TextEncoder().encode(`data: ${JSON.stringify(chunk)}\n\n`));
}
// Detect end of stream: meteringEvent + contextUsageEvent usually come last
// Kiro doesn't always send messageStopEvent, so we need to detect completion
if ((eventType === "meteringEvent" || eventType === "contextUsageEvent") && !state.endDetected) {
state.endDetected = true;
// Schedule finish chunk emission after a short delay
setTimeout(() => {
if (!state.finishEmitted) {
state.finishEmitted = true;
const finishChunk = {
id: responseId,
object: "chat.completion.chunk",
created,
model,
choices: [{
index: 0,
delta: {},
finish_reason: state.hasToolCalls ? "tool_calls" : "stop"
}]
};
controller.enqueue(new TextEncoder().encode(`data: ${JSON.stringify(finishChunk)}\n\n`));
}
}, 100); // 100ms delay to check for more events
}
}
} catch (error) {
controller.error(error);
}
},
cancel() {
reader.cancel();
}
});
// Create new response with SSE headers
return new Response(stream, {
status: response.status,
statusText: response.statusText,
headers: {
"Content-Type": "text/event-stream",
"Cache-Control": "no-cache",
"Connection": "keep-alive"
}
});
}
async refreshCredentials(credentials, log) {
if (!credentials.refreshToken) return null;
try {
const response = await fetch(this.config.tokenUrl, {
method: "POST",
headers: {
"Content-Type": "application/json",
"User-Agent": "kiro-cli/1.0.0"
},
body: JSON.stringify({
refreshToken: credentials.refreshToken
})
});
if (!response.ok) {
log?.error?.("TOKEN", `Kiro refresh failed: ${response.status}`);
return null;
}
const data = await response.json();
const result = {
accessToken: data.accessToken,
refreshToken: data.refreshToken || credentials.refreshToken,
expiresIn: data.expiresIn || 3600
};
log?.info?.("TOKEN", "Kiro token refreshed");
return result;
} catch (error) {
log?.error?.("TOKEN", `Kiro refresh error: ${error.message}`);
return null;
}
}
}
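The tool-call chunks emitted above follow the OpenAI streaming convention: the first chunk for a tool carries its id and name at a given index, and later chunks append argument fragments at that same index. A client-side reassembly sketch (the chunk objects below are hypothetical, shaped like the executor's output):

```javascript
// Reassemble OpenAI-style streaming tool-call deltas, as emitted by
// transformEventStreamToSSE: match on `index`, concatenate `arguments`.
function mergeToolCallDeltas(chunks) {
  const calls = [];
  for (const chunk of chunks) {
    for (const tc of chunk.choices[0].delta.tool_calls || []) {
      if (!calls[tc.index]) calls[tc.index] = { id: "", name: "", arguments: "" };
      if (tc.id) calls[tc.index].id = tc.id;
      if (tc.function?.name) calls[tc.index].name = tc.function.name;
      if (tc.function?.arguments) calls[tc.index].arguments += tc.function.arguments;
    }
  }
  return calls;
}

// Hypothetical chunk sequence: one start chunk, then two argument fragments.
const chunks = [
  { choices: [{ delta: { tool_calls: [{ index: 0, id: "tool-1", type: "function", function: { name: "read_file", arguments: "" } }] } }] },
  { choices: [{ delta: { tool_calls: [{ index: 0, function: { arguments: '{"path":' } }] } }] },
  { choices: [{ delta: { tool_calls: [{ index: 0, function: { arguments: '"a.txt"}' } }] } }] }
];
const merged = mergeToolCallDeltas(chunks);
// merged[0].arguments === '{"path":"a.txt"}'
```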
/**
* Parse AWS EventStream frame
*/
function parseEventFrame(data) {
try {
const view = new DataView(data.buffer, data.byteOffset);
const headersLength = view.getUint32(4, false);
// Parse headers
const headers = {};
let offset = 12; // After prelude
const headerEnd = 12 + headersLength;
while (offset < headerEnd && offset < data.length) {
const nameLen = data[offset];
offset++;
if (offset + nameLen > data.length) break;
const name = new TextDecoder().decode(data.slice(offset, offset + nameLen));
offset += nameLen;
const headerType = data[offset];
offset++;
if (headerType === 7) { // String type
const valueLen = (data[offset] << 8) | data[offset + 1];
offset += 2;
if (offset + valueLen > data.length) break;
const value = new TextDecoder().decode(data.slice(offset, offset + valueLen));
offset += valueLen;
headers[name] = value;
} else {
break;
}
}
// Parse payload
const payloadStart = 12 + headersLength;
const payloadEnd = data.length - 4; // Exclude message CRC
let payload = null;
if (payloadEnd > payloadStart) {
const payloadStr = new TextDecoder().decode(data.slice(payloadStart, payloadEnd));
// Skip empty or whitespace-only payloads
if (!payloadStr || !payloadStr.trim()) {
return { headers, payload: null };
}
try {
payload = JSON.parse(payloadStr);
} catch (parseError) {
// Log parse error for debugging
console.warn(`[Kiro] Failed to parse payload: ${parseError.message} | payload: ${payloadStr.substring(0, 100)}`);
payload = { raw: payloadStr };
}
}
return { headers, payload };
} catch {
return null;
}
}
export default KiroExecutor;
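parseEventFrame above walks the AWS EventStream wire layout: a 12-byte prelude (4-byte big-endian total length, 4-byte headers length, 4-byte prelude CRC), the headers block (only type 7, a length-prefixed UTF-8 string, is handled), the JSON payload, and a trailing 4-byte message CRC. A minimal round-trip sketch of that layout (CRC fields are left zeroed for illustration; real frames carry CRC32 checksums):

```javascript
// Build a minimal EventStream frame: prelude + one ":event-type" header + JSON payload.
function buildFrame(eventType, payloadObj) {
  const enc = new TextEncoder();
  const name = enc.encode(":event-type");
  const value = enc.encode(eventType);
  // Header encoding: 1-byte name length, name, 1-byte type (7 = string),
  // 2-byte big-endian value length, value.
  const headers = new Uint8Array(1 + name.length + 1 + 2 + value.length);
  let o = 0;
  headers[o++] = name.length;
  headers.set(name, o); o += name.length;
  headers[o++] = 7;
  headers[o++] = value.length >> 8;
  headers[o++] = value.length & 0xff;
  headers.set(value, o);
  const payload = enc.encode(JSON.stringify(payloadObj));
  const frame = new Uint8Array(12 + headers.length + payload.length + 4);
  const view = new DataView(frame.buffer);
  view.setUint32(0, frame.length, false);   // total length (big-endian)
  view.setUint32(4, headers.length, false); // headers length
  // Bytes 8-11 (prelude CRC) and the last 4 bytes (message CRC) stay zero here.
  frame.set(headers, 12);
  frame.set(payload, 12 + headers.length);
  return frame;
}

// Decode it back, mirroring the parseEventFrame logic in this commit.
function parseFrame(data) {
  const view = new DataView(data.buffer, data.byteOffset);
  const headersLength = view.getUint32(4, false);
  const dec = new TextDecoder();
  const headers = {};
  let offset = 12;
  while (offset < 12 + headersLength) {
    const nameLen = data[offset++];
    const name = dec.decode(data.slice(offset, offset + nameLen));
    offset += nameLen;
    if (data[offset++] !== 7) break; // only string headers in this sketch
    const valueLen = (data[offset] << 8) | data[offset + 1];
    offset += 2;
    headers[name] = dec.decode(data.slice(offset, offset + valueLen));
    offset += valueLen;
  }
  const payload = JSON.parse(dec.decode(data.slice(12 + headersLength, data.length - 4)));
  return { headers, payload };
}

const frame = buildFrame("assistantResponseEvent", { content: "Hi" });
const event = parseFrame(frame);
// event.headers[":event-type"] === "assistantResponseEvent"; event.payload.content === "Hi"
```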


@@ -96,15 +96,6 @@ export async function handleChatCore({ body, modelInfo, credentials, log, onCred
// 1. Log raw request from client
reqLogger.logRawRequest(body);
// 1a. Log format detection info
reqLogger.logFormatInfo({
sourceFormat,
targetFormat,
provider,
model,
stream
});
log?.debug?.("FORMAT", `${sourceFormat} → ${targetFormat} | stream=${stream}`);


@@ -21,7 +21,8 @@ export function getQuotaCooldown(backoffLevel = 0) {
export function checkFallbackError(status, errorText, backoffLevel = 0) {
// Check error message FIRST - specific patterns take priority over status codes
if (errorText) {
const lowerError = errorText.toLowerCase();
const errorStr = typeof errorText === "string" ? errorText : JSON.stringify(errorText);
const lowerError = errorStr.toLowerCase();
// "Request not allowed" - short cooldown (5s), takes priority over status code
if (lowerError.includes("request not allowed")) {


@@ -7,6 +7,7 @@ const ALIAS_TO_PROVIDER_ID = {
if: "iflow",
ag: "antigravity",
gh: "github",
kr: "kiro",
// API Key providers (alias = id)
openai: "openai",
anthropic: "anthropic",


@@ -240,6 +240,96 @@ export async function refreshCodexToken(refreshToken, log) {
};
}
/**
* Specialized refresh for Kiro (AWS CodeWhisperer) tokens
* Supports both AWS SSO OIDC (Builder ID/IDC) and Social Auth (Google/GitHub)
*/
export async function refreshKiroToken(refreshToken, providerSpecificData, log) {
const authMethod = providerSpecificData?.authMethod;
const clientId = providerSpecificData?.clientId;
const clientSecret = providerSpecificData?.clientSecret;
const region = providerSpecificData?.region;
// AWS SSO OIDC (Builder ID or IDC)
// If clientId and clientSecret exist, assume AWS SSO OIDC (default to builder-id if authMethod not specified)
if (clientId && clientSecret) {
const isIDC = authMethod === "idc";
const endpoint = isIDC && region
? `https://oidc.${region}.amazonaws.com/token`
: "https://oidc.us-east-1.amazonaws.com/token";
const response = await fetch(endpoint, {
method: "POST",
headers: {
"Content-Type": "application/json",
Accept: "application/json",
},
body: JSON.stringify({
clientId: clientId,
clientSecret: clientSecret,
refreshToken: refreshToken,
grantType: "refresh_token",
}),
});
if (!response.ok) {
const errorText = await response.text();
log?.error?.("TOKEN_REFRESH", "Failed to refresh Kiro AWS token", {
status: response.status,
error: errorText,
});
return null;
}
const tokens = await response.json();
log?.info?.("TOKEN_REFRESH", "Successfully refreshed Kiro AWS token", {
hasNewAccessToken: !!tokens.accessToken,
expiresIn: tokens.expiresIn,
});
return {
accessToken: tokens.accessToken,
refreshToken: tokens.refreshToken || refreshToken,
expiresIn: tokens.expiresIn,
};
}
// Social Auth (Google/GitHub) - use Kiro's refresh endpoint
const response = await fetch(PROVIDERS.kiro.tokenUrl, {
method: "POST",
headers: {
"Content-Type": "application/json",
Accept: "application/json",
},
body: JSON.stringify({
refreshToken: refreshToken,
}),
});
if (!response.ok) {
const errorText = await response.text();
log?.error?.("TOKEN_REFRESH", "Failed to refresh Kiro social token", {
status: response.status,
error: errorText,
});
return null;
}
const tokens = await response.json();
log?.info?.("TOKEN_REFRESH", "Successfully refreshed Kiro social token", {
hasNewAccessToken: !!tokens.accessToken,
expiresIn: tokens.expiresIn,
});
return {
accessToken: tokens.accessToken,
refreshToken: tokens.refreshToken || refreshToken,
expiresIn: tokens.expiresIn,
};
}
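refreshKiroToken above picks its endpoint from the credential shape: clientId plus clientSecret imply AWS SSO OIDC (region-specific only for IdC), otherwise the social-auth path hits Kiro's own refresh endpoint. The selection logic in isolation, with the providerSpecificData field names as assumed above:

```javascript
// Choose the refresh endpoint the same way refreshKiroToken does.
function kiroRefreshEndpoint({ authMethod, clientId, clientSecret, region } = {}) {
  if (clientId && clientSecret) {
    // AWS SSO OIDC: Builder ID defaults to us-east-1; IdC may carry its own region.
    return authMethod === "idc" && region
      ? `https://oidc.${region}.amazonaws.com/token`
      : "https://oidc.us-east-1.amazonaws.com/token";
  }
  // Social auth (Google/GitHub): Kiro's own refresh endpoint.
  return "https://prod.us-east-1.auth.desktop.kiro.dev/refreshToken";
}
```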
/**
* Specialized refresh for iFlow OAuth tokens
*/


@@ -7,6 +7,7 @@ export const FORMATS = {
GEMINI: "gemini",
GEMINI_CLI: "gemini-cli",
CODEX: "codex",
ANTIGRAVITY: "antigravity"
ANTIGRAVITY: "antigravity",
KIRO: "kiro"
};


@@ -163,10 +163,12 @@ export async function initTranslators() {
await import("./request/gemini-to-openai.js");
await import("./request/openai-to-gemini.js");
await import("./request/openai-responses.js");
await import("./request/openai-to-kiro.js");
// Response translators
await import("./response/claude-to-openai.js");
await import("./response/openai-to-claude.js");
await import("./response/gemini-to-openai.js");
await import("./response/openai-responses.js");
await import("./response/kiro-to-openai.js");
}


@@ -131,6 +131,103 @@ function openaiResponsesToOpenAIRequest(model, body, stream, credentials) {
return result;
}
// Register
register(FORMATS.OPENAI_RESPONSES, FORMATS.OPENAI, openaiResponsesToOpenAIRequest, null);
/**
* Convert OpenAI Chat Completions to OpenAI Responses API format
*/
function openaiToOpenAIResponsesRequest(model, body, stream, credentials) {
const result = {
model,
input: [],
stream: true,
store: false
};
// Extract system message as instructions
let hasSystemMessage = false;
const messages = body.messages || [];
for (const msg of messages) {
if (msg.role === "system") {
// Use first system message as instructions
if (!hasSystemMessage) {
result.instructions = typeof msg.content === "string" ? msg.content : "";
hasSystemMessage = true;
}
continue; // Skip system messages in input
}
// Convert user/assistant messages to input items
if (msg.role === "user" || msg.role === "assistant") {
const contentType = msg.role === "user" ? "input_text" : "output_text";
const content = typeof msg.content === "string"
? [{ type: contentType, text: msg.content }]
: Array.isArray(msg.content)
? msg.content.map(c => {
if (c.type === "text") return { type: contentType, text: c.text };
if (c.type === "image_url") return { type: contentType, text: "[Image content]" };
return c;
})
: [];
result.input.push({
type: "message",
role: msg.role,
content
});
}
// Convert tool calls
if (msg.role === "assistant" && msg.tool_calls) {
for (const tc of msg.tool_calls) {
result.input.push({
type: "function_call",
call_id: tc.id,
name: tc.function?.name || "",
arguments: tc.function?.arguments || "{}"
});
}
}
// Convert tool results
if (msg.role === "tool") {
result.input.push({
type: "function_call_output",
call_id: msg.tool_call_id,
output: msg.content
});
}
}
// If no system message, leave instructions empty (will be filled by executor)
if (!hasSystemMessage) {
result.instructions = "";
}
// Convert tools format
if (body.tools && Array.isArray(body.tools)) {
result.tools = body.tools.map(tool => {
if (tool.type === "function") {
return {
type: "function",
name: tool.function.name,
description: tool.function.description,
parameters: tool.function.parameters,
strict: tool.function.strict
};
}
return tool;
});
}
// Pass through other relevant fields
if (body.temperature !== undefined) result.temperature = body.temperature;
if (body.max_tokens !== undefined) result.max_tokens = body.max_tokens;
if (body.top_p !== undefined) result.top_p = body.top_p;
return result;
}
// Register both directions
register(FORMATS.OPENAI_RESPONSES, FORMATS.OPENAI, openaiResponsesToOpenAIRequest, null);
register(FORMATS.OPENAI, FORMATS.OPENAI_RESPONSES, openaiToOpenAIResponsesRequest, null);
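The chat → Responses mapping registered above can be sketched in isolation (standalone illustration; `chatToResponsesSketch` is a hypothetical name and the registry, tools, and sampling fields are omitted):

```javascript
// Sketch of the chat -> Responses mapping above (illustrative only):
// system -> instructions, user/assistant -> message items, tool -> function_call_output.
function chatToResponsesSketch(model, body) {
  const result = { model, input: [], stream: true, store: false, instructions: "" };
  for (const msg of body.messages || []) {
    if (msg.role === "system") {
      // First system message becomes top-level instructions
      if (!result.instructions) result.instructions = typeof msg.content === "string" ? msg.content : "";
      continue;
    }
    if (msg.role === "user" || msg.role === "assistant") {
      const type = msg.role === "user" ? "input_text" : "output_text";
      result.input.push({ type: "message", role: msg.role, content: [{ type, text: String(msg.content ?? "") }] });
    }
    if (msg.role === "tool") {
      result.input.push({ type: "function_call_output", call_id: msg.tool_call_id, output: msg.content });
    }
  }
  return result;
}

const req = chatToResponsesSketch("gpt-5", {
  messages: [
    { role: "system", content: "Be terse." },
    { role: "user", content: "hi" },
  ],
});
// req.instructions is "Be terse."; req.input holds one user message item
```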

View File

@@ -0,0 +1,317 @@
/**
* OpenAI to Kiro Request Translator
* Converts OpenAI Chat Completions format to Kiro/AWS CodeWhisperer format
*/
import { register } from "../index.js";
import { FORMATS } from "../formats.js";
import { v4 as uuidv4 } from "uuid";
/**
* Convert OpenAI messages to Kiro format
*/
function convertMessages(messages, tools, model) {
let history = [];
let currentMessage = null;
let systemPrompt = "";
// Collect tool results first (they come as separate messages with role: "tool")
const toolResultsMap = new Map(); // Map tool_call_id -> content
for (const msg of messages) {
if (msg.role === "tool" && msg.tool_call_id) {
const content = typeof msg.content === "string" ? msg.content :
(Array.isArray(msg.content) ? msg.content.map(c => c.text || "").join("\n") : "");
toolResultsMap.set(msg.tool_call_id, content);
}
}
for (const msg of messages) {
const role = msg.role;
// Skip tool messages - already processed above
if (role === "tool") {
continue;
}
const content = typeof msg.content === "string" ? msg.content :
(Array.isArray(msg.content) ? msg.content.map(c => c.text || "").join("\n") : "");
if (role === "system") {
systemPrompt += (systemPrompt ? "\n" : "") + content;
continue;
}
if (role === "user") {
const userMsg = {
userInputMessage: {
content: content,
modelId: "", // Will be set later
origin: "AI_EDITOR"
}
};
// Add tools to first user message context
if (tools && tools.length > 0 && history.length === 0) {
userMsg.userInputMessage.userInputMessageContext = {
tools: tools.map(t => {
const name = t.function?.name || t.name;
let description = t.function?.description || t.description || "";
// CRITICAL: Kiro API requires non-empty description
if (!description.trim()) {
description = `Tool: ${name}`;
}
// NOTE: description truncation (Kiro max ~5000 chars observed in testing) is currently disabled
return {
toolSpecification: {
name,
description,
inputSchema: {
json: t.function?.parameters || t.parameters || {}
}
}
};
})
};
}
currentMessage = userMsg;
history.push(userMsg);
}
if (role === "assistant") {
const assistantMsg = {
assistantResponseMessage: {
content: content
}
};
// Handle tool calls
if (msg.tool_calls && msg.tool_calls.length > 0) {
assistantMsg.assistantResponseMessage.toolUses = msg.tool_calls.map(tc => {
let input = tc.function?.arguments ?? tc.arguments ?? {};
if (typeof input === "string") {
// Guard against malformed or truncated JSON arguments
try { input = JSON.parse(input); } catch { input = {}; }
}
return {
toolUseId: tc.id || uuidv4(),
name: tc.function?.name || tc.name,
input
};
});
// Collect tool results for this assistant message's tool calls
const toolResults = [];
for (const tc of msg.tool_calls) {
const toolResult = toolResultsMap.get(tc.id);
if (toolResult !== undefined) {
toolResults.push({
content: [{ text: toolResult }],
status: "success",
toolUseId: tc.id
});
}
}
// Add tool results to the NEXT user message if they exist
if (toolResults.length > 0) {
// Store for next user message
assistantMsg._pendingToolResults = toolResults;
}
}
history.push(assistantMsg);
}
}
// Apply pending tool results to user messages
for (let i = 0; i < history.length; i++) {
if (history[i].assistantResponseMessage?._pendingToolResults) {
const toolResults = history[i].assistantResponseMessage._pendingToolResults;
delete history[i].assistantResponseMessage._pendingToolResults;
// Find next user message
for (let j = i + 1; j < history.length; j++) {
if (history[j].userInputMessage) {
if (!history[j].userInputMessage.userInputMessageContext) {
history[j].userInputMessage.userInputMessageContext = {};
}
history[j].userInputMessage.userInputMessageContext.toolResults = toolResults;
break;
}
}
}
}
// Also check currentMessage for pending tool results
if (history.length > 0 && history[history.length - 1].assistantResponseMessage?._pendingToolResults) {
const toolResults = history[history.length - 1].assistantResponseMessage._pendingToolResults;
delete history[history.length - 1].assistantResponseMessage._pendingToolResults;
if (currentMessage?.userInputMessage) {
if (!currentMessage.userInputMessage.userInputMessageContext) {
currentMessage.userInputMessage.userInputMessageContext = {};
}
currentMessage.userInputMessage.userInputMessageContext.toolResults = toolResults;
}
}
// Pop last message as currentMessage if it's user message
if (history.length > 0 && history[history.length - 1].userInputMessage) {
currentMessage = history.pop();
}
// Move tools from history to currentMessage if needed
const firstHistoryItem = history[0];
if (firstHistoryItem?.userInputMessage?.userInputMessageContext?.tools &&
!currentMessage?.userInputMessage?.userInputMessageContext?.tools) {
// Move tools to currentMessage
if (!currentMessage.userInputMessage.userInputMessageContext) {
currentMessage.userInputMessage.userInputMessageContext = {};
}
currentMessage.userInputMessage.userInputMessageContext.tools =
firstHistoryItem.userInputMessage.userInputMessageContext.tools;
console.log(`[Kiro Translator] Moved ${currentMessage.userInputMessage.userInputMessageContext.tools.length} tools to currentMessage`);
}
// CRITICAL: Clean up history for Kiro API compatibility
// Kiro API has strict limitations on history content:
// 1. NO toolUses in assistant messages (causes 400 Bad Request)
// 2. NO toolResults in user messages (causes 400 Bad Request)
// 3. NO tools definitions in history (only in currentMessage)
// 4. NO empty userInputMessageContext objects
// 5. modelId must NOT be empty string
// 6. NO consecutive user messages (must alternate user/assistant)
history.forEach(item => {
// Remove toolUses from assistant messages (Kiro doesn't support tool history)
if (item.assistantResponseMessage?.toolUses) {
delete item.assistantResponseMessage.toolUses;
}
// Remove tools from user messages (only currentMessage should have tools)
if (item.userInputMessage?.userInputMessageContext?.tools) {
delete item.userInputMessage.userInputMessageContext.tools;
}
// Remove toolResults from user messages (Kiro doesn't support passing tool results via history)
if (item.userInputMessage?.userInputMessageContext?.toolResults) {
delete item.userInputMessage.userInputMessageContext.toolResults;
}
// Remove empty userInputMessageContext
if (item.userInputMessage?.userInputMessageContext &&
Object.keys(item.userInputMessage.userInputMessageContext).length === 0) {
delete item.userInputMessage.userInputMessageContext;
}
// Ensure modelId is not empty (use model from params if empty)
if (item.userInputMessage && !item.userInputMessage.modelId) {
item.userInputMessage.modelId = model;
}
});
// CRITICAL: Merge consecutive user messages
// Kiro API requires alternating user/assistant pattern in history
const mergedHistory = [];
for (let i = 0; i < history.length; i++) {
const current = history[i];
// If current is user message and previous is also user message, merge them
if (current.userInputMessage &&
mergedHistory.length > 0 &&
mergedHistory[mergedHistory.length - 1].userInputMessage) {
// Merge content into previous user message
const prev = mergedHistory[mergedHistory.length - 1];
prev.userInputMessage.content += "\n\n" + current.userInputMessage.content;
console.log(`[Kiro Translator] Merged consecutive user messages in history`);
} else {
// Add normally
mergedHistory.push(current);
}
}
history = mergedHistory;
// Log payload size warning if system prompt is very long
const systemPromptSize = systemPrompt.length;
if (systemPromptSize > 10000) {
console.warn(`[Kiro Translator] WARNING: System prompt is ${systemPromptSize} chars. Total payload may be large.`);
}
return { history, currentMessage, systemPrompt };
}
/**
* Build Kiro payload from OpenAI format
*/
function buildKiroPayload(model, body, stream, credentials) {
const messages = body.messages || [];
const tools = body.tools || [];
const maxTokens = body.max_tokens || 32000;
const temperature = body.temperature;
const topP = body.top_p;
const { history, currentMessage, systemPrompt } = convertMessages(messages, tools, model);
// Get profileArn from credentials
const profileArn = credentials?.providerSpecificData?.profileArn || "";
// Inject system prompt into current message content
let finalContent = currentMessage?.userInputMessage?.content || "";
if (systemPrompt) {
// Log warning if system prompt is very long (may cause Kiro API to reject request)
if (systemPrompt.length > 10000) {
console.warn(`[Kiro Translator] WARNING: System prompt is very long (${systemPrompt.length} chars). Kiro API may reject requests with total content >20KB. Consider reducing system prompt length.`);
}
finalContent = `[System: ${systemPrompt}]\n\n${finalContent}`;
}
// Add timestamp context
const timestamp = new Date().toISOString();
finalContent = `[Context: Current time is ${timestamp}]\n\n${finalContent}`;
// Log final content size for debugging
if (finalContent.length > 20000) {
console.warn(`[Kiro Translator] WARNING: Final content size is ${finalContent.length} chars. Kiro API typically rejects requests >20-30KB.`);
}
const payload = {
conversationState: {
chatTriggerType: "MANUAL",
conversationId: uuidv4(),
currentMessage: {
userInputMessage: {
content: finalContent,
modelId: model,
origin: "AI_EDITOR",
...(currentMessage?.userInputMessage?.userInputMessageContext && {
userInputMessageContext: currentMessage.userInputMessage.userInputMessageContext
})
}
},
history: history
}
};
// Only add profileArn if available
if (profileArn) {
payload.profileArn = profileArn;
}
// Add inference config if specified
if (maxTokens || temperature !== undefined || topP !== undefined) {
payload.inferenceConfig = {};
if (maxTokens) payload.inferenceConfig.maxTokens = maxTokens;
if (temperature !== undefined) payload.inferenceConfig.temperature = temperature;
if (topP !== undefined) payload.inferenceConfig.topP = topP;
}
return payload;
}
// Register translator
register(FORMATS.OPENAI, FORMATS.KIRO, buildKiroPayload, null);
export { buildKiroPayload };
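The alternating-history requirement enforced above can be reduced to a small standalone sketch (simplified from the merge loop in `convertMessages`; not the exported implementation):

```javascript
// Sketch: Kiro history must alternate user/assistant, so consecutive
// user messages are merged into one (mirrors the translator's merge pass).
function mergeConsecutiveUserMessages(history) {
  const merged = [];
  for (const item of history) {
    const prev = merged[merged.length - 1];
    if (item.userInputMessage && prev?.userInputMessage) {
      // Fold this user turn into the previous one
      prev.userInputMessage.content += "\n\n" + item.userInputMessage.content;
    } else {
      merged.push(item);
    }
  }
  return merged;
}

const h = mergeConsecutiveUserMessages([
  { userInputMessage: { content: "a" } },
  { userInputMessage: { content: "b" } },
  { assistantResponseMessage: { content: "ok" } },
]);
// h has 2 entries; the first user message content is "a\n\nb"
```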

View File

@@ -0,0 +1,185 @@
/**
* Kiro to OpenAI Response Translator
* Converts Kiro/AWS CodeWhisperer streaming events to OpenAI SSE format
*/
import { register } from "../index.js";
import { FORMATS } from "../formats.js";
/**
* Parse Kiro SSE event and convert to OpenAI format
* Kiro events: assistantResponseEvent, codeEvent, supplementaryWebLinksEvent, etc.
*/
function convertKiroToOpenAI(chunk, state) {
if (!chunk) return null;
// If chunk is already in OpenAI format (from executor transform), return as-is
if (chunk.object === "chat.completion.chunk" && chunk.choices) {
return chunk;
}
// Handle string chunk (raw SSE data)
let data = chunk;
if (typeof chunk === "string") {
// Parse SSE format: event:xxx\ndata:xxx
const lines = chunk.split("\n");
let eventType = "";
let eventData = "";
for (const line of lines) {
if (line.startsWith("event:")) {
eventType = line.slice(6).trim();
} else if (line.startsWith(":event-type:")) {
eventType = line.slice(12).trim();
} else if (line.startsWith("data:")) {
eventData = line.slice(5).trim();
} else if (line.startsWith(":content-type:")) {
// Skip content-type header
} else if (line.trim() && !line.startsWith(":")) {
// Raw JSON data
eventData = line.trim();
}
}
if (!eventData) return null;
try {
data = JSON.parse(eventData);
data._eventType = eventType;
} catch {
// Not JSON, might be raw text
data = { text: eventData, _eventType: eventType };
}
}
// Initialize state if needed
if (!state.responseId) {
state.responseId = `chatcmpl-${Date.now()}`;
state.created = Math.floor(Date.now() / 1000);
state.chunkIndex = 0;
}
const eventType = data._eventType || data.event || "";
// Handle different Kiro event types
if (eventType === "assistantResponseEvent" || data.assistantResponseEvent) {
const content = data.assistantResponseEvent?.content || data.content || "";
if (!content) return null;
const openaiChunk = {
id: state.responseId,
object: "chat.completion.chunk",
created: state.created,
model: state.model || "kiro",
choices: [{
index: 0,
delta: {
...(state.chunkIndex === 0 ? { role: "assistant" } : {}),
content: content
},
finish_reason: null
}]
};
state.chunkIndex++;
return openaiChunk;
}
// Handle reasoning/thinking events
if (eventType === "reasoningContentEvent" || data.reasoningContentEvent) {
const content = data.reasoningContentEvent?.content || data.content || "";
if (!content) return null;
// Convert to thinking block format (Claude-style)
const openaiChunk = {
id: state.responseId,
object: "chat.completion.chunk",
created: state.created,
model: state.model || "kiro",
choices: [{
index: 0,
delta: {
...(state.chunkIndex === 0 ? { role: "assistant" } : {}),
content: `<thinking>${content}</thinking>`
},
finish_reason: null
}]
};
state.chunkIndex++;
return openaiChunk;
}
// Handle tool use events
if (eventType === "toolUseEvent" || data.toolUseEvent) {
const toolUse = data.toolUseEvent || data;
const toolCallId = toolUse.toolUseId || `call_${Date.now()}`;
const toolName = toolUse.name || "";
const toolInput = toolUse.input || {};
const openaiChunk = {
id: state.responseId,
object: "chat.completion.chunk",
created: state.created,
model: state.model || "kiro",
choices: [{
index: 0,
delta: {
...(state.chunkIndex === 0 ? { role: "assistant" } : {}),
tool_calls: [{
index: 0,
id: toolCallId,
type: "function",
function: {
name: toolName,
arguments: JSON.stringify(toolInput)
}
}]
},
finish_reason: null
}]
};
state.chunkIndex++;
return openaiChunk;
}
// Handle completion/done events
if (eventType === "messageStopEvent" || eventType === "done" || data.messageStopEvent) {
const openaiChunk = {
id: state.responseId,
object: "chat.completion.chunk",
created: state.created,
model: state.model || "kiro",
choices: [{
index: 0,
delta: {},
finish_reason: "stop"
}]
};
return openaiChunk;
}
// Handle usage events
if (eventType === "usageEvent" || data.usageEvent) {
const usage = data.usageEvent || data;
state.usage = {
prompt_tokens: usage.inputTokens || 0,
completion_tokens: usage.outputTokens || 0,
total_tokens: (usage.inputTokens || 0) + (usage.outputTokens || 0)
};
return null;
}
// Unknown event type - skip
return null;
}
// Register translator
register(FORMATS.KIRO, FORMATS.OPENAI, null, convertKiroToOpenAI);
export { convertKiroToOpenAI };
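The SSE line parsing at the top of `convertKiroToOpenAI` can be illustrated on its own (simplified sketch assuming a single data line per event; `parseKiroEvent` is a hypothetical helper, not part of this file):

```javascript
// Sketch: split a raw Kiro SSE event into an event type plus JSON payload,
// falling back to raw text when the data line is not valid JSON.
function parseKiroEvent(raw) {
  let eventType = "", eventData = "";
  for (const line of raw.split("\n")) {
    if (line.startsWith("event:")) eventType = line.slice(6).trim();
    else if (line.startsWith(":event-type:")) eventType = line.slice(12).trim();
    else if (line.startsWith("data:")) eventData = line.slice(5).trim();
  }
  try {
    return { ...JSON.parse(eventData), _eventType: eventType };
  } catch {
    return { text: eventData, _eventType: eventType };
  }
}

const evt = parseKiroEvent('event:assistantResponseEvent\ndata:{"content":"Hello"}');
// evt._eventType is "assistantResponseEvent" and evt.content is "Hello"
```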

View File

@@ -0,0 +1,205 @@
import { register } from "../index.js";
import { FORMATS } from "../formats.js";
// Prefix for Claude OAuth tool names (must match request translator)
const CLAUDE_OAUTH_TOOL_PREFIX = "proxy_";
// Helper: stop thinking block if started
function stopThinkingBlock(state, results) {
if (!state.thinkingBlockStarted) return;
results.push({
type: "content_block_stop",
index: state.thinkingBlockIndex
});
state.thinkingBlockStarted = false;
}
// Helper: stop text block if started
function stopTextBlock(state, results) {
if (!state.textBlockStarted || state.textBlockClosed) return;
state.textBlockClosed = true;
results.push({
type: "content_block_stop",
index: state.textBlockIndex
});
state.textBlockStarted = false;
}
// Convert OpenAI stream chunk to Claude format
function openaiToClaudeResponse(chunk, state) {
if (!chunk || !chunk.choices?.[0]) return null;
const results = [];
const choice = chunk.choices[0];
const delta = choice.delta;
// First chunk - ALWAYS send message_start first
if (!state.messageStartSent) {
state.messageStartSent = true;
state.messageId = chunk.id?.replace("chatcmpl-", "") || `msg_${Date.now()}`;
if (!state.messageId || state.messageId === "chat" || state.messageId.length < 8) {
state.messageId = chunk.extend_fields?.requestId ||
chunk.extend_fields?.traceId ||
`msg_${Date.now()}`;
}
state.model = chunk.model || "unknown";
state.nextBlockIndex = 0;
state.toolCalls = new Map(); // accumulator for streamed tool-call fragments
results.push({
type: "message_start",
message: {
id: state.messageId,
type: "message",
role: "assistant",
model: state.model,
content: [],
stop_reason: null,
stop_sequence: null,
usage: { input_tokens: 0, output_tokens: 0 }
}
});
}
// Handle reasoning_content (thinking) - GLM, DeepSeek, etc.
const reasoningContent = delta?.reasoning_content || delta?.reasoning;
if (reasoningContent) {
stopTextBlock(state, results);
if (!state.thinkingBlockStarted) {
state.thinkingBlockIndex = state.nextBlockIndex++;
state.thinkingBlockStarted = true;
results.push({
type: "content_block_start",
index: state.thinkingBlockIndex,
content_block: { type: "thinking", thinking: "" }
});
}
results.push({
type: "content_block_delta",
index: state.thinkingBlockIndex,
delta: { type: "thinking_delta", thinking: reasoningContent }
});
}
// Handle regular content
if (delta?.content) {
stopThinkingBlock(state, results);
if (!state.textBlockStarted) {
state.textBlockIndex = state.nextBlockIndex++;
state.textBlockStarted = true;
state.textBlockClosed = false;
results.push({
type: "content_block_start",
index: state.textBlockIndex,
content_block: { type: "text", text: "" }
});
}
results.push({
type: "content_block_delta",
index: state.textBlockIndex,
delta: { type: "text_delta", text: delta.content }
});
}
// Tool calls - accumulate arguments instead of emitting immediately
if (delta?.tool_calls) {
for (const tc of delta.tool_calls) {
const idx = tc.index ?? 0;
if (tc.id) {
stopThinkingBlock(state, results);
stopTextBlock(state, results);
const toolBlockIndex = state.nextBlockIndex++;
// Strip prefix from tool name for response
let toolName = tc.function?.name || "";
if (toolName.startsWith(CLAUDE_OAUTH_TOOL_PREFIX)) {
toolName = toolName.slice(CLAUDE_OAUTH_TOOL_PREFIX.length);
}
// Initialize accumulator for this tool
state.toolCalls.set(idx, {
id: tc.id,
name: toolName,
blockIndex: toolBlockIndex,
arguments: "", // Accumulate arguments here
startEmitted: false // Track if content_block_start sent
});
}
// Accumulate arguments instead of emitting immediately
if (tc.function?.arguments) {
const toolInfo = state.toolCalls.get(idx);
if (toolInfo) {
toolInfo.arguments += tc.function.arguments;
}
}
}
}
// Finish - emit all accumulated tools in correct order
if (choice.finish_reason) {
stopThinkingBlock(state, results);
stopTextBlock(state, results);
// STEP 1: Emit all content_block_start for tools (like CLIProxyAPIPlus)
const sortedTools = Array.from(state.toolCalls.entries()).sort((a, b) => a[0] - b[0]);
for (const [, toolInfo] of sortedTools) {
if (!toolInfo.startEmitted) {
results.push({
type: "content_block_start",
index: toolInfo.blockIndex,
content_block: {
type: "tool_use",
id: toolInfo.id,
name: toolInfo.name,
input: {}
}
});
toolInfo.startEmitted = true;
}
}
// STEP 2: Emit input_json_delta + content_block_stop for each tool
for (const [, toolInfo] of sortedTools) {
if (toolInfo.arguments) {
results.push({
type: "content_block_delta",
index: toolInfo.blockIndex,
delta: { type: "input_json_delta", partial_json: toolInfo.arguments }
});
}
results.push({
type: "content_block_stop",
index: toolInfo.blockIndex
});
}
results.push({
type: "message_delta",
delta: { stop_reason: convertFinishReason(choice.finish_reason) },
usage: { output_tokens: 0 }
});
results.push({ type: "message_stop" });
}
return results.length > 0 ? results : null;
}
// Convert OpenAI finish_reason to Claude stop_reason
function convertFinishReason(reason) {
switch (reason) {
case "stop": return "end_turn";
case "length": return "max_tokens";
case "tool_calls": return "tool_use";
default: return "end_turn";
}
}
// Register
register(FORMATS.OPENAI, FORMATS.CLAUDE, null, openaiToClaudeResponse);
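The deferred tool-argument flush above (accumulate fragments per index, emit once on finish) can be sketched standalone (`accumulateToolArgs` is an illustrative name; the real translator also interleaves Claude block events):

```javascript
// Sketch: gather streamed tool-call argument fragments keyed by index,
// then return the completed calls in index order, as the flush step does.
function accumulateToolArgs(deltas) {
  const tools = new Map();
  for (const delta of deltas) {
    for (const tc of delta.tool_calls || []) {
      const idx = tc.index ?? 0;
      if (tc.id) tools.set(idx, { id: tc.id, name: tc.function?.name || "", arguments: "" });
      if (tc.function?.arguments) tools.get(idx).arguments += tc.function.arguments;
    }
  }
  return [...tools.entries()].sort((a, b) => a[0] - b[0]).map(([, t]) => t);
}

const flushed = accumulateToolArgs([
  { tool_calls: [{ index: 0, id: "call_1", function: { name: "get_time", arguments: '{"tz":' } }] },
  { tool_calls: [{ index: 0, function: { arguments: '"UTC"}' } }] },
]);
// flushed[0].arguments is the reassembled string '{"tz":"UTC"}'
```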

View File

@@ -68,22 +68,25 @@ function writeJsonFile(sessionPath, filename, data) {
}
}
// Mask sensitive data in headers
// Mask sensitive data in headers (DISABLED - keep full token for testing)
function maskSensitiveHeaders(headers) {
if (!headers) return {};
const masked = { ...headers };
const sensitiveKeys = ["authorization", "x-api-key", "cookie", "token"];
return { ...headers };
for (const key of Object.keys(masked)) {
const lowerKey = key.toLowerCase();
if (sensitiveKeys.some(sk => lowerKey.includes(sk))) {
const value = masked[key];
if (value && value.length > 20) {
masked[key] = value.slice(0, 10) + "..." + value.slice(-5);
}
}
}
return masked;
// Old masking code (disabled):
// const masked = { ...headers };
// const sensitiveKeys = ["authorization", "x-api-key", "cookie", "token"];
//
// for (const key of Object.keys(masked)) {
// const lowerKey = key.toLowerCase();
// if (sensitiveKeys.some(sk => lowerKey.includes(sk))) {
// const value = masked[key];
// if (value && value.length > 20) {
// masked[key] = value.slice(0, 10) + "..." + value.slice(-5);
// }
// }
// }
// return masked;
}
// No-op logger when logging is disabled
@@ -92,7 +95,6 @@ function createNoOpLogger() {
sessionPath: null,
logClientRawRequest() {},
logRawRequest() {},
logFormatInfo() {},
logConvertedRequest() {},
logRawResponse() {},
logConvertedResponse() {},
@@ -121,9 +123,9 @@ export async function createRequestLogger(sourceFormat, targetFormat, model) {
return {
get sessionPath() { return sessionPath; },
// 0. Log client raw request (before any conversion)
// 1. Log client raw request (before any conversion)
logClientRawRequest(endpoint, body, headers = {}) {
writeJsonFile(sessionPath, "0_client_raw_request.json", {
writeJsonFile(sessionPath, "1_client_raw_request.json", {
timestamp: new Date().toISOString(),
endpoint,
headers: maskSensitiveHeaders(headers),
@@ -131,26 +133,18 @@ export async function createRequestLogger(sourceFormat, targetFormat, model) {
});
},
// 1. Log raw request from client (after initial conversion like responsesApi)
// 2. Log raw request from client (after initial conversion like responsesApi)
logRawRequest(body, headers = {}) {
writeJsonFile(sessionPath, "1_raw_request.json", {
writeJsonFile(sessionPath, "2_raw_request.json", {
timestamp: new Date().toISOString(),
headers: maskSensitiveHeaders(headers),
body
});
},
// 1a. Log format detection info
logFormatInfo(info) {
writeJsonFile(sessionPath, "1a_format_info.json", {
timestamp: new Date().toISOString(),
...info
});
},
// 2. Log converted request to send to provider
// 3. Log converted request to send to provider
logConvertedRequest(url, headers, body) {
writeJsonFile(sessionPath, "2_converted_request.json", {
writeJsonFile(sessionPath, "3_converted_request.json", {
timestamp: new Date().toISOString(),
url,
headers: maskSensitiveHeaders(headers),
@@ -158,9 +152,9 @@ export async function createRequestLogger(sourceFormat, targetFormat, model) {
});
},
// 3. Log provider response (for non-streaming or error)
// 4. Log provider response (for non-streaming or error)
logProviderResponse(status, statusText, headers, body) {
const filename = "3_provider_response.json";
const filename = "4_provider_response.json";
writeJsonFile(sessionPath, filename, {
timestamp: new Date().toISOString(),
status,
@@ -170,39 +164,39 @@ export async function createRequestLogger(sourceFormat, targetFormat, model) {
});
},
// 3. Append streaming chunk to provider response
// 4. Append streaming chunk to provider response
appendProviderChunk(chunk) {
if (!fs || !sessionPath) return;
try {
const filePath = path.join(sessionPath, "3_provider_response.txt");
const filePath = path.join(sessionPath, "4_provider_response.txt");
fs.appendFileSync(filePath, chunk);
} catch (err) {
// Ignore append errors
}
},
// 4. Log converted response to client (for non-streaming)
// 5. Log converted response to client (for non-streaming)
logConvertedResponse(body) {
writeJsonFile(sessionPath, "4_converted_response.json", {
writeJsonFile(sessionPath, "5_converted_response.json", {
timestamp: new Date().toISOString(),
body
});
},
// 4. Append streaming chunk to converted response
// 5. Append streaming chunk to converted response
appendConvertedChunk(chunk) {
if (!fs || !sessionPath) return;
try {
const filePath = path.join(sessionPath, "4_converted_response.txt");
const filePath = path.join(sessionPath, "5_converted_response.txt");
fs.appendFileSync(filePath, chunk);
} catch (err) {
// Ignore append errors
}
},
// 5. Log error
// 6. Log error
logError(error, requestBody = null) {
writeJsonFile(sessionPath, "5_error.json", {
writeJsonFile(sessionPath, "6_error.json", {
timestamp: new Date().toISOString(),
error: error?.message || String(error),
stack: error?.stack,

BIN
public/providers/kiro.png Normal file (1.1 KiB)

Binary file not shown.

View File

@@ -38,7 +38,8 @@ export async function GET(request, { params }) {
// For providers that don't use PKCE (like GitHub), don't pass codeChallenge
let deviceData;
if (provider === "github") {
if (provider === "github" || provider === "kiro") {
// GitHub and Kiro don't use PKCE for device code
deviceData = await requestDeviceCode(provider);
} else {
// Qwen and other providers use PKCE
@@ -101,16 +102,19 @@ export async function POST(request, { params }) {
}
if (action === "poll") {
const { deviceCode, codeVerifier } = body;
const { deviceCode, codeVerifier, extraData } = body;
if (!deviceCode) {
return NextResponse.json({ error: "Missing device code" }, { status: 400 });
}
// For providers that don't use PKCE (like GitHub), don't pass codeVerifier
// For providers that don't use PKCE (like GitHub, Kiro), don't pass codeVerifier
let result;
if (provider === "github") {
result = await pollForToken(provider, deviceCode);
} else if (provider === "kiro") {
// Kiro needs extraData (clientId, clientSecret) from device code response
result = await pollForToken(provider, deviceCode, null, extraData);
} else {
// Qwen and other providers use PKCE
if (!codeVerifier) {
@@ -143,24 +147,14 @@ export async function POST(request, { params }) {
});
}
// Still pending or error
if (!result.pending) {
// Save error to database for actual errors (not pending)
await createProviderConnection({
provider,
authType: "oauth",
testStatus: "error",
lastError: result.errorDescription,
errorCode: result.error,
lastErrorAt: new Date().toISOString(),
});
}
// Still pending or error - don't create connection for pending states
const isPending = result.pending || result.error === "authorization_pending" || result.error === "slow_down";
return NextResponse.json({
success: false,
error: result.error,
errorDescription: result.errorDescription,
pending: result.pending || result.error === "authorization_pending",
pending: isPending,
});
}

View File

@@ -113,6 +113,24 @@ export const GITHUB_CONFIG = {
editorPluginVersion: "copilot-chat/0.26.7",
};
// Kiro OAuth Configuration (AWS SSO OIDC Device Code Flow)
export const KIRO_CONFIG = {
// AWS SSO OIDC endpoints for Builder ID
ssoOidcEndpoint: "https://oidc.us-east-1.amazonaws.com",
registerClientUrl: "https://oidc.us-east-1.amazonaws.com/client/register",
deviceAuthUrl: "https://oidc.us-east-1.amazonaws.com/device_authorization",
tokenUrl: "https://oidc.us-east-1.amazonaws.com/token",
refreshTokenUrl: "https://prod.us-east-1.auth.desktop.kiro.dev/refreshToken",
// AWS Builder ID start URL
startUrl: "https://view.awsapps.com/start",
// Client registration params
clientName: "kiro-cli",
clientType: "public",
scopes: ["codewhisperer:completions", "codewhisperer:analysis", "codewhisperer:conversations"],
grantTypes: ["urn:ietf:params:oauth:grant-type:device_code", "refresh_token"],
issuerUrl: "https://identitycenter.amazonaws.com/ssoins-722374e8c3c8e6c6",
};
// OAuth timeout (5 minutes)
export const OAUTH_TIMEOUT = 300000;
@@ -126,4 +144,5 @@ export const PROVIDERS = {
ANTIGRAVITY: "antigravity",
OPENAI: "openai",
GITHUB: "github",
KIRO: "kiro",
};

View File

@@ -12,6 +12,7 @@ import {
IFLOW_CONFIG,
ANTIGRAVITY_CONFIG,
GITHUB_CONFIG,
KIRO_CONFIG,
} from "./constants/oauth";
// Provider configurations
@@ -536,6 +537,125 @@ const PROVIDERS = {
},
}),
},
kiro: {
config: KIRO_CONFIG,
flowType: "device_code",
// Kiro uses AWS SSO OIDC - requires client registration first
requestDeviceCode: async (config) => {
// Step 1: Register client with AWS SSO OIDC
const registerRes = await fetch(config.registerClientUrl, {
method: "POST",
headers: {
"Content-Type": "application/json",
Accept: "application/json",
},
body: JSON.stringify({
clientName: config.clientName,
clientType: config.clientType,
scopes: config.scopes,
grantTypes: config.grantTypes,
issuerUrl: config.issuerUrl,
}),
});
if (!registerRes.ok) {
const error = await registerRes.text();
throw new Error(`Client registration failed: ${error}`);
}
const clientInfo = await registerRes.json();
// Step 2: Request device authorization
const deviceRes = await fetch(config.deviceAuthUrl, {
method: "POST",
headers: {
"Content-Type": "application/json",
Accept: "application/json",
},
body: JSON.stringify({
clientId: clientInfo.clientId,
clientSecret: clientInfo.clientSecret,
startUrl: config.startUrl,
}),
});
if (!deviceRes.ok) {
const error = await deviceRes.text();
throw new Error(`Device authorization failed: ${error}`);
}
const deviceData = await deviceRes.json();
// Return combined data for polling
return {
device_code: deviceData.deviceCode,
user_code: deviceData.userCode,
verification_uri: deviceData.verificationUri,
verification_uri_complete: deviceData.verificationUriComplete,
expires_in: deviceData.expiresIn,
interval: deviceData.interval || 5,
// Store client credentials for token exchange
_clientId: clientInfo.clientId,
_clientSecret: clientInfo.clientSecret,
};
},
pollToken: async (config, deviceCode, codeVerifier, extraData) => {
const response = await fetch(config.tokenUrl, {
method: "POST",
headers: {
"Content-Type": "application/json",
Accept: "application/json",
},
body: JSON.stringify({
clientId: extraData?._clientId,
clientSecret: extraData?._clientSecret,
deviceCode: deviceCode,
grantType: "urn:ietf:params:oauth:grant-type:device_code",
}),
});
let data;
// Read the body once as text: if response.json() fails, the stream is
// already consumed and a second response.text() call would throw.
const text = await response.text();
try {
data = JSON.parse(text);
} catch (e) {
data = { error: "invalid_response", error_description: text };
}
// AWS SSO OIDC returns camelCase
if (data.accessToken) {
return {
ok: true,
data: {
access_token: data.accessToken,
refresh_token: data.refreshToken,
expires_in: data.expiresIn,
// Store client credentials for refresh
_clientId: extraData?._clientId,
_clientSecret: extraData?._clientSecret,
},
};
}
return {
ok: false,
data: {
error: data.error || "authorization_pending",
error_description: data.error_description || data.message,
},
};
},
mapTokens: (tokens) => ({
accessToken: tokens.access_token,
refreshToken: tokens.refresh_token,
expiresIn: tokens.expires_in,
providerSpecificData: {
clientId: tokens._clientId,
clientSecret: tokens._clientSecret,
},
}),
},
};
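For context, the Kiro device flow above follows RFC 8628: the client waits `interval` seconds between polls, keeps polling on `authorization_pending`, and backs off on `slow_down`. A minimal polling loop, assuming a `pollForToken`-shaped helper (the names here are illustrative, not part of the module above):

```javascript
// Sketch of an RFC 8628 polling loop; `poll` stands in for a pollForToken-style helper
// that resolves to { ok, data } as in the provider code above.
async function pollUntilDone(poll, deviceCode, intervalSec, expiresInSec) {
  const deadline = Date.now() + expiresInSec * 1000;
  let waitMs = intervalSec * 1000;
  while (Date.now() < deadline) {
    await new Promise((resolve) => setTimeout(resolve, waitMs));
    const result = await poll(deviceCode);
    if (result.ok) return result.data;
    if (result.data.error === "slow_down") {
      waitMs += 5000; // RFC 8628 section 3.5: add 5 seconds on slow_down
    } else if (result.data.error !== "authorization_pending") {
      throw new Error(result.data.error_description || result.data.error);
    }
  }
  throw new Error("expired_token");
}
```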
/**
@@ -614,14 +734,18 @@ export async function requestDeviceCode(providerName, codeChallenge) {
/**
* Poll for token (for device_code flow)
* @param {string} providerName - Provider name
* @param {string} deviceCode - Device code from requestDeviceCode
* @param {string} codeVerifier - PKCE code verifier (optional for some providers)
* @param {object} extraData - Extra data from device code response (e.g. clientId/clientSecret for Kiro)
*/
export async function pollForToken(providerName, deviceCode, codeVerifier, extraData) {
const provider = getProvider(providerName);
if (provider.flowType !== "device_code") {
throw new Error(`Provider ${providerName} does not support device code flow`);
}
const result = await provider.pollToken(provider.config, deviceCode, codeVerifier, extraData);
if (result.ok) {
// For device code flows, success is only when we have an access token


@@ -147,8 +147,8 @@ export default function OAuthModal({ isOpen, provider, providerInfo, onSuccess,
try {
setError(null);
// Device code flow (GitHub, Qwen, Kiro)
if (provider === "github" || provider === "qwen" || provider === "kiro") {
setIsDeviceCode(true);
setStep("waiting");
@@ -162,8 +162,9 @@ export default function OAuthModal({ isOpen, provider, providerInfo, onSuccess,
const verifyUrl = data.verification_uri_complete || data.verification_uri;
if (verifyUrl) window.open(verifyUrl, "_blank");
// Start polling - pass extraData for Kiro (contains _clientId, _clientSecret)
const extraData = provider === "kiro" ? { _clientId: data._clientId, _clientSecret: data._clientSecret } : null;
startPolling(data.device_code, data.codeVerifier, data.interval || 5, extraData);
return;
}
@@ -209,7 +210,7 @@ export default function OAuthModal({ isOpen, provider, providerInfo, onSuccess,
};
// Poll for device code token
const startPolling = async (deviceCode, codeVerifier, interval, extraData) => {
setPolling(true);
const maxAttempts = 60;
@@ -220,7 +221,7 @@ export default function OAuthModal({ isOpen, provider, providerInfo, onSuccess,
const res = await fetch(`/api/oauth/${provider}/poll`, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ deviceCode, codeVerifier, extraData }),
});
const data = await res.json();


@@ -9,6 +9,7 @@ export const OAUTH_PROVIDERS = {
qwen: { id: "qwen", alias: "qw", name: "Qwen Code", icon: "psychology", color: "#10B981" },
"gemini-cli": { id: "gemini-cli", alias: "gc", name: "Gemini CLI", icon: "terminal", color: "#4285F4" },
github: { id: "github", alias: "gh", name: "GitHub Copilot", icon: "code", color: "#333333" },
kiro: { id: "kiro", alias: "kr", name: "Kiro AI", icon: "psychology_alt", color: "#FF6B35" },
};
export const APIKEY_PROVIDERS = {

tester/translator/testFromFile.js Executable file

@@ -0,0 +1,148 @@
#!/usr/bin/env node
/**
* Test sending request from converted file directly to provider
* Usage:
* node testFromFile.js <file-path>
* node testFromFile.js data/claude-to-kiro/3_converted_request.json
*/
const fs = require("fs");
const path = require("path");
const args = process.argv.slice(2);
if (args.length === 0 || args[0] === "--help" || args[0] === "-h") {
console.log("");
console.log("🧪 Test From File - Send converted request to provider");
console.log("");
console.log("Usage:");
console.log(" node testFromFile.js <file-path>");
console.log("");
console.log("Examples:");
console.log(" node testFromFile.js data/claude-to-kiro/3_converted_request.json");
console.log(" node testFromFile.js ../logs/openai_codex_xxx/3_converted_request.json");
console.log("");
console.log("File format:");
console.log(" {");
console.log(" \"url\": \"https://api.provider.com/...\",");
console.log(" \"headers\": { ... },");
console.log(" \"body\": { ... }");
console.log(" }");
console.log("");
process.exit(0);
}
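For illustration, an input file in that shape might look like the following (the endpoint, model name, and token are placeholder values, not a real provider contract):

```json
{
  "url": "https://api.example.com/v1/messages",
  "headers": {
    "Content-Type": "application/json",
    "Authorization": "Bearer <redacted>"
  },
  "body": {
    "model": "example-model",
    "stream": true,
    "messages": [{ "role": "user", "content": "ping" }]
  }
}
```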
const filePath = args[0];
const fullPath = path.isAbsolute(filePath) ? filePath : path.join(process.cwd(), filePath);
if (!fs.existsSync(fullPath)) {
console.error(`❌ File not found: ${fullPath}`);
process.exit(1);
}
// Load request data
let data;
try {
data = JSON.parse(fs.readFileSync(fullPath, "utf8"));
} catch (err) {
console.error(`❌ Failed to parse JSON: ${err.message}`);
process.exit(1);
}
const { url, headers, body } = data;
if (!url || !headers || !body) {
console.error("❌ Invalid file format. Expected: { url, headers, body }");
process.exit(1);
}
// Display request info
console.log("\n🚀 Sending Request from File\n");
console.log(`📁 File: ${filePath}`);
console.log(`🌐 URL: ${url}`);
console.log(`📋 Headers:`);
Object.entries(headers).forEach(([k, v]) => {
if (k.toLowerCase().includes("auth") || k.toLowerCase().includes("key") || k.toLowerCase().includes("bearer")) {
const str = String(v);
if (str.length > 20) {
console.log(` ${k}: ${str.slice(0, 20)}...`);
} else {
console.log(` ${k}: ${str}`);
}
} else {
console.log(` ${k}: ${v}`);
}
});
console.log(`\n📊 Request Body:`);
console.log(` Model: ${body.model || "N/A"}`);
console.log(` Messages: ${body.messages?.length || 0}`);
console.log(` Tools: ${body.tools?.length || 0}`);
console.log(` Stream: ${body.stream || false}`);
// Send request
(async () => {
try {
console.log("\n🚀 Sending request...");
const response = await fetch(url, {
method: "POST",
headers,
body: JSON.stringify(body)
});
console.log(`\n📥 Response: ${response.status} ${response.statusText}`);
if (!response.ok) {
const errorText = await response.text();
console.error(`\n❌ Error response:\n${errorText}`);
process.exit(1);
}
const isStreaming = body.stream || response.headers.get("content-type")?.includes("text/event-stream");
if (isStreaming) {
console.log("\n📡 Streaming response...\n");
const reader = response.body.getReader();
const decoder = new TextDecoder();
let chunkCount = 0;
let buffer = "";
while (true) {
const { done, value } = await reader.read();
if (done) break;
buffer += decoder.decode(value, { stream: true });
const lines = buffer.split("\n");
buffer = lines.pop(); // Keep incomplete line in buffer
for (const line of lines) {
if (line.trim()) {
process.stdout.write(line + "\n");
chunkCount++;
}
}
}
// Process any remaining data
if (buffer.trim()) {
process.stdout.write(buffer + "\n");
}
console.log(`\n\n✅ Received ${chunkCount} chunks`);
} else {
const responseData = await response.json();
console.log("\n📦 Response:");
console.log(JSON.stringify(responseData, null, 2));
}
} catch (err) {
console.error("\n❌ Request failed:", err.message);
if (process.env.DEBUG) {
console.error(err.stack);
}
process.exit(1);
}
})();
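The streaming loop above forwards raw SSE lines to stdout; if the JSON payloads are needed instead, a small helper along these lines works for `data:`-prefixed events (a sketch; field names and the `[DONE]` sentinel vary by provider):

```javascript
// Parse one SSE line into a JSON event, or null if it carries no data payload.
function parseSseLine(line) {
  if (!line.startsWith("data:")) return null; // ignore comments, event:, id: lines
  const payload = line.slice(5).trim();
  if (payload === "[DONE]") return null; // OpenAI-style stream terminator
  try {
    return JSON.parse(payload);
  } catch {
    return null; // partial or non-JSON payload
  }
}
```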