feat: add fast mode toggle for OpenAI models

Author: Peter Steinberger
Date: 2026-03-12 23:30:58 +00:00
parent ddcaec89e9
commit d5bffcdeab
66 changed files with 990 additions and 36 deletions


@@ -47,6 +47,7 @@ OpenClaw ships with the piai catalog. These providers require **no**
- Override per model via `agents.defaults.models["openai/<model>"].params.transport` (`"sse"`, `"websocket"`, or `"auto"`)
- OpenAI Responses WebSocket warm-up is enabled by default; toggle it via `params.openaiWsWarmup` (`true`/`false`)
- OpenAI priority processing can be enabled via `agents.defaults.models["openai/<model>"].params.serviceTier`
- OpenAI fast mode can be enabled per model via `agents.defaults.models["<provider>/<model>"].params.fastMode` (sketched in the example below)
```json5
{
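  // Editor's sketch, not the commit's original example (which this hunk cuts
  // off): "gpt-5" is a placeholder model id, and the "priority" service-tier
  // value is an assumption; the parameter names come from the bullets above.
  agents: {
    defaults: {
      models: {
        "openai/gpt-5": {
          params: {
            transport: "auto", // "sse", "websocket", or "auto" (the default)
            openaiWsWarmup: true, // Responses WebSocket warm-up (on by default)
            serviceTier: "priority", // assumed value; enables priority processing
            fastMode: true, // the new per-model fast mode toggle
          },
        },
      },
    },
  },
}
```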
@@ -78,6 +79,7 @@ OpenClaw ships with the piai catalog. These providers require **no**
- CLI: `openclaw onboard --auth-choice openai-codex` or `openclaw models auth login --provider openai-codex`
- Default transport is `auto` (WebSocket-first, SSE fallback)
- Override per model via `agents.defaults.models["openai-codex/<model>"].params.transport` (`"sse"`, `"websocket"`, or `"auto"`)
- Shares the same `/fast` toggle and `params.fastMode` config as direct `openai/*` (see the sketch below)
- Policy note: OpenAI Codex OAuth is explicitly supported for external tools/workflows like OpenClaw.
```json5
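{
  // Editor's sketch, not the commit's original example (which this hunk cuts
  // off): "gpt-5-codex" is a placeholder model id.
  agents: {
    defaults: {
      models: {
        "openai-codex/gpt-5-codex": {
          params: {
            transport: "websocket", // force WebSocket instead of the "auto" default
            fastMode: true, // same toggle as openai/<model>; also flipped via /fast
          },
        },
      },
    },
  },
}
```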