chore: apply local workspace updates (#9911)

* chore: apply local workspace updates

* fix: resolve prep findings after rebase (#9898) (thanks @gumadeiras)

* refactor: centralize model allowlist normalization (#9898) (thanks @gumadeiras)

* fix: guard model allowlist initialization (#9911)

* docs: update changelog scope for #9911

* docs: remove model names from changelog entry (#9911)

* fix: satisfy type-aware lint in model allowlist (#9911)
Gustavo Madeira Santana committed 2026-02-05 16:54:44 -05:00 (committed by GitHub)
parent 93b450349f
commit 4629054403
72 changed files with 722 additions and 251 deletions

View File

@@ -78,8 +78,8 @@ export AWS_BEARER_TOKEN_BEDROCK="..."
auth: "aws-sdk",
models: [
{
id: "anthropic.claude-opus-4-5-20251101-v1:0",
name: "Claude Opus 4.5 (Bedrock)",
id: "us.anthropic.claude-opus-4-6-v1:0",
name: "Claude Opus 4.6 (Bedrock)",
reasoning: true,
input: ["text", "image"],
cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
@@ -92,7 +92,7 @@ export AWS_BEARER_TOKEN_BEDROCK="..."
},
agents: {
defaults: {
model: { primary: "amazon-bedrock/anthropic.claude-opus-4-5-20251101-v1:0" },
model: { primary: "amazon-bedrock/us.anthropic.claude-opus-4-6-v1:0" },
},
},
}

View File

@@ -13,7 +13,7 @@ For model selection rules, see [/concepts/models](/concepts/models).
## Quick rules
- Model refs use `provider/model` (example: `opencode/claude-opus-4-5`).
- Model refs use `provider/model` (example: `opencode/claude-opus-4-6`).
- If you set `agents.defaults.models`, it becomes the allowlist.
- CLI helpers: `openclaw onboard`, `openclaw models list`, `openclaw models set <provider/model>`.
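A minimal sketch of the allowlist rule above (illustrative model IDs; once entries exist under `agents.defaults.models`, only those refs stay selectable):
```json5
{
  agents: {
    defaults: {
      model: { primary: "anthropic/claude-opus-4-6" },
      // With this map present, it acts as the allowlist: only these refs are usable.
      models: {
        "anthropic/claude-opus-4-6": { alias: "Opus" },
        "anthropic/claude-sonnet-4-5": { alias: "Sonnet" },
      },
    },
  },
}
```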
@@ -26,12 +26,12 @@ OpenClaw ships with the pi-ai catalog. These providers require **no**
- Provider: `openai`
- Auth: `OPENAI_API_KEY`
- Example model: `openai/gpt-5.2`
- Example model: `openai/gpt-5.1-codex`
- CLI: `openclaw onboard --auth-choice openai-api-key`
```json5
{
agents: { defaults: { model: { primary: "openai/gpt-5.2" } } },
agents: { defaults: { model: { primary: "openai/gpt-5.1-codex" } } },
}
```
@@ -39,12 +39,12 @@ OpenClaw ships with the pi-ai catalog. These providers require **no**
- Provider: `anthropic`
- Auth: `ANTHROPIC_API_KEY` or `claude setup-token`
- Example model: `anthropic/claude-opus-4-5`
- Example model: `anthropic/claude-opus-4-6`
- CLI: `openclaw onboard --auth-choice token` (paste setup-token) or `openclaw models auth paste-token --provider anthropic`
```json5
{
agents: { defaults: { model: { primary: "anthropic/claude-opus-4-5" } } },
agents: { defaults: { model: { primary: "anthropic/claude-opus-4-6" } } },
}
```
@@ -52,12 +52,12 @@ OpenClaw ships with the pi-ai catalog. These providers require **no**
- Provider: `openai-codex`
- Auth: OAuth (ChatGPT)
- Example model: `openai-codex/gpt-5.2`
- Example model: `openai-codex/gpt-5.3-codex`
- CLI: `openclaw onboard --auth-choice openai-codex` or `openclaw models auth login --provider openai-codex`
```json5
{
agents: { defaults: { model: { primary: "openai-codex/gpt-5.2" } } },
agents: { defaults: { model: { primary: "openai-codex/gpt-5.3-codex" } } },
}
```
@@ -65,12 +65,12 @@ OpenClaw ships with the pi-ai catalog. These providers require **no**
- Provider: `opencode`
- Auth: `OPENCODE_API_KEY` (or `OPENCODE_ZEN_API_KEY`)
- Example model: `opencode/claude-opus-4-5`
- Example model: `opencode/claude-opus-4-6`
- CLI: `openclaw onboard --auth-choice opencode-zen`
```json5
{
agents: { defaults: { model: { primary: "opencode/claude-opus-4-5" } } },
agents: { defaults: { model: { primary: "opencode/claude-opus-4-6" } } },
}
```
@@ -106,7 +106,7 @@ OpenClaw ships with the pi-ai catalog. These providers require **no**
- Provider: `vercel-ai-gateway`
- Auth: `AI_GATEWAY_API_KEY`
- Example model: `vercel-ai-gateway/anthropic/claude-opus-4.5`
- Example model: `vercel-ai-gateway/anthropic/claude-opus-4.6`
- CLI: `openclaw onboard --auth-choice ai-gateway-api-key`
### Other built-in providers
@@ -309,7 +309,7 @@ Notes:
```bash
openclaw onboard --auth-choice opencode-zen
openclaw models set opencode/claude-opus-4-5
openclaw models set opencode/claude-opus-4-6
openclaw models list
```

View File

@@ -83,7 +83,7 @@ Example allowlist config:
model: { primary: "anthropic/claude-sonnet-4-5" },
models: {
"anthropic/claude-sonnet-4-5": { alias: "Sonnet" },
"anthropic/claude-opus-4-5": { alias: "Opus" },
"anthropic/claude-opus-4-6": { alias: "Opus" },
},
},
}

View File

@@ -221,7 +221,7 @@ Split by channel: route WhatsApp to a fast everyday agent and Telegram to an Opu
id: "opus",
name: "Deep Work",
workspace: "~/.openclaw/workspace-opus",
model: "anthropic/claude-opus-4-5",
model: "anthropic/claude-opus-4-6",
},
],
},
@@ -255,7 +255,7 @@ Keep WhatsApp on the fast agent, but route one DM to Opus:
id: "opus",
name: "Deep Work",
workspace: "~/.openclaw/workspace-opus",
model: "anthropic/claude-opus-4-5",
model: "anthropic/claude-opus-4-6",
},
],
},

View File

@@ -25,13 +25,13 @@ want “always works” text responses without relying on external APIs.
You can use Claude Code CLI **without any config** (OpenClaw ships a built-in default):
```bash
openclaw agent --message "hi" --model claude-cli/opus-4.5
openclaw agent --message "hi" --model claude-cli/opus-4.6
```
Codex CLI also works out of the box:
```bash
openclaw agent --message "hi" --model codex-cli/gpt-5.2-codex
openclaw agent --message "hi" --model codex-cli/gpt-5.3-codex
```
If your gateway runs under launchd/systemd and PATH is minimal, add just the
@@ -62,11 +62,12 @@ Add a CLI backend to your fallback list so it only runs when primary models fail
agents: {
defaults: {
model: {
primary: "anthropic/claude-opus-4-5",
fallbacks: ["claude-cli/opus-4.5"],
primary: "anthropic/claude-opus-4-6",
fallbacks: ["claude-cli/opus-4.6", "claude-cli/opus-4.5"],
},
models: {
"anthropic/claude-opus-4-5": { alias: "Opus" },
"anthropic/claude-opus-4-6": { alias: "Opus" },
"claude-cli/opus-4.6": {},
"claude-cli/opus-4.5": {},
},
},
@@ -112,6 +113,7 @@ The provider id becomes the left side of your model ref:
input: "arg",
modelArg: "--model",
modelAliases: {
"claude-opus-4-6": "opus",
"claude-opus-4-5": "opus",
"claude-sonnet-4-5": "sonnet",
},

View File

@@ -226,13 +226,13 @@ Save to `~/.openclaw/openclaw.json` and you can DM the bot from that number.
userTimezone: "America/Chicago",
model: {
primary: "anthropic/claude-sonnet-4-5",
fallbacks: ["anthropic/claude-opus-4-5", "openai/gpt-5.2"],
fallbacks: ["anthropic/claude-opus-4-6", "openai/gpt-5.2"],
},
imageModel: {
primary: "openrouter/anthropic/claude-sonnet-4-5",
},
models: {
"anthropic/claude-opus-4-5": { alias: "opus" },
"anthropic/claude-opus-4-6": { alias: "opus" },
"anthropic/claude-sonnet-4-5": { alias: "sonnet" },
"openai/gpt-5.2": { alias: "gpt" },
},
@@ -496,7 +496,7 @@ If more than one person can DM your bot (multiple entries in `allowFrom`, pairin
workspace: "~/.openclaw/workspace",
model: {
primary: "anthropic/claude-sonnet-4-5",
fallbacks: ["anthropic/claude-opus-4-5"],
fallbacks: ["anthropic/claude-opus-4-6"],
},
},
}
@@ -534,7 +534,7 @@ If more than one person can DM your bot (multiple entries in `allowFrom`, pairin
agent: {
workspace: "~/.openclaw/workspace",
model: {
primary: "anthropic/claude-opus-4-5",
primary: "anthropic/claude-opus-4-6",
fallbacks: ["minimax/MiniMax-M2.1"],
},
},

View File

@@ -1547,8 +1547,8 @@ The `responsePrefix` string can include template variables that resolve dynamica
| Variable | Description | Example |
| ----------------- | ---------------------- | --------------------------- |
| `{model}` | Short model name | `claude-opus-4-5`, `gpt-4o` |
| `{modelFull}` | Full model identifier | `anthropic/claude-opus-4-5` |
| `{model}` | Short model name | `claude-opus-4-6`, `gpt-4o` |
| `{modelFull}` | Full model identifier | `anthropic/claude-opus-4-6` |
| `{provider}` | Provider name | `anthropic`, `openai` |
| `{thinkingLevel}` | Current thinking level | `high`, `low`, `off` |
| `{identity.name}` | Agent identity name | (same as `"auto"` mode) |
@@ -1564,7 +1564,7 @@ Unresolved variables remain as literal text.
}
```
Example output: `[claude-opus-4-5 | think:high] Here's my response...`
Example output: `[claude-opus-4-6 | think:high] Here's my response...`
WhatsApp inbound prefix is configured via `channels.whatsapp.messagePrefix` (deprecated:
`messages.messagePrefix`). Default stays **unchanged**: `"[openclaw]"` when
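A minimal sketch of setting that prefix explicitly (the value shown is the documented default):
```json5
{
  channels: {
    whatsapp: {
      // Inbound message prefix; "[openclaw]" matches the documented default.
      messagePrefix: "[openclaw]",
    },
  },
}
```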
@@ -1710,7 +1710,7 @@ Z.AI GLM-4.x models automatically enable thinking mode unless you:
OpenClaw also ships a few built-in alias shorthands. Defaults only apply when the model
is already present in `agents.defaults.models`:
- `opus` -> `anthropic/claude-opus-4-5`
- `opus` -> `anthropic/claude-opus-4-6`
- `sonnet` -> `anthropic/claude-sonnet-4-5`
- `gpt` -> `openai/gpt-5.2`
- `gpt-mini` -> `openai/gpt-5-mini`
@@ -1719,18 +1719,18 @@ is already present in `agents.defaults.models`:
If you configure the same alias name (case-insensitive) yourself, your value wins (defaults never override).
Example: Opus 4.5 primary with MiniMax M2.1 fallback (hosted MiniMax):
Example: Opus 4.6 primary with MiniMax M2.1 fallback (hosted MiniMax):
```json5
{
agents: {
defaults: {
models: {
"anthropic/claude-opus-4-5": { alias: "opus" },
"anthropic/claude-opus-4-6": { alias: "opus" },
"minimax/MiniMax-M2.1": { alias: "minimax" },
},
model: {
primary: "anthropic/claude-opus-4-5",
primary: "anthropic/claude-opus-4-6",
fallbacks: ["minimax/MiniMax-M2.1"],
},
},
@@ -1786,7 +1786,7 @@ Example:
agents: {
defaults: {
models: {
"anthropic/claude-opus-4-5": { alias: "Opus" },
"anthropic/claude-opus-4-6": { alias: "Opus" },
"anthropic/claude-sonnet-4-1": { alias: "Sonnet" },
"openrouter/deepseek/deepseek-r1:free": {},
"zai/glm-4.7": {
@@ -1800,7 +1800,7 @@ Example:
},
},
model: {
primary: "anthropic/claude-opus-4-5",
primary: "anthropic/claude-opus-4-6",
fallbacks: [
"openrouter/deepseek/deepseek-r1:free",
"openrouter/meta-llama/llama-3.3-70b-instruct:free",
@@ -2011,7 +2011,7 @@ Typing indicators:
- `session.typingIntervalSeconds`: per-session override for the refresh interval.
See [/concepts/typing-indicators](/concepts/typing-indicators) for behavior details.
`agents.defaults.model.primary` should be set as `provider/model` (e.g. `anthropic/claude-opus-4-5`).
`agents.defaults.model.primary` should be set as `provider/model` (e.g. `anthropic/claude-opus-4-6`).
Aliases come from `agents.defaults.models.*.alias` (e.g. `Opus`).
If you omit the provider, OpenClaw currently assumes `anthropic` as a temporary
deprecation fallback.
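Putting the two settings together, a minimal sketch (the primary is fully qualified as `provider/model`, so the deprecated bare-name fallback never applies):
```json5
{
  agents: {
    defaults: {
      // Fully qualified ref; avoids relying on the temporary `anthropic` fallback.
      model: { primary: "anthropic/claude-opus-4-6" },
      models: {
        "anthropic/claude-opus-4-6": { alias: "Opus" },
      },
    },
  },
}
```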
@@ -2485,7 +2485,7 @@ the built-in `opencode` provider from pi-ai; set `OPENCODE_API_KEY` (or
Notes:
- Model refs use `opencode/<modelId>` (example: `opencode/claude-opus-4-5`).
- Model refs use `opencode/<modelId>` (example: `opencode/claude-opus-4-6`).
- If you enable an allowlist via `agents.defaults.models`, add each model you plan to use.
- Shortcut: `openclaw onboard --auth-choice opencode-zen`.
@@ -2493,8 +2493,8 @@ Notes:
{
agents: {
defaults: {
model: { primary: "opencode/claude-opus-4-5" },
models: { "opencode/claude-opus-4-5": { alias: "Opus" } },
model: { primary: "opencode/claude-opus-4-6" },
models: { "opencode/claude-opus-4-6": { alias: "Opus" } },
},
},
}
@@ -2652,7 +2652,7 @@ Use MiniMax M2.1 directly without LM Studio:
agent: {
model: { primary: "minimax/MiniMax-M2.1" },
models: {
"anthropic/claude-opus-4-5": { alias: "Opus" },
"anthropic/claude-opus-4-6": { alias: "Opus" },
"minimax/MiniMax-M2.1": { alias: "Minimax" },
},
},

View File

@@ -83,7 +83,7 @@ and logged; a message that is only `HEARTBEAT_OK` is dropped.
defaults: {
heartbeat: {
every: "30m", // default: 30m (0m disables)
model: "anthropic/claude-opus-4-5",
model: "anthropic/claude-opus-4-6",
includeReasoning: false, // default: false (deliver separate Reasoning: message when available)
target: "last", // last | none | <channel id> (core or plugin, e.g. "bluebubbles")
to: "+15551234567", // optional channel-specific override

View File

@@ -21,7 +21,7 @@ Best current local stack. Load MiniMax M2.1 in LM Studio, enable the local serve
defaults: {
model: { primary: "lmstudio/minimax-m2.1-gs32" },
models: {
"anthropic/claude-opus-4-5": { alias: "Opus" },
"anthropic/claude-opus-4-6": { alias: "Opus" },
"lmstudio/minimax-m2.1-gs32": { alias: "Minimax" },
},
},
@@ -68,12 +68,12 @@ Keep hosted models configured even when running local; use `models.mode: "merge"
defaults: {
model: {
primary: "anthropic/claude-sonnet-4-5",
fallbacks: ["lmstudio/minimax-m2.1-gs32", "anthropic/claude-opus-4-5"],
fallbacks: ["lmstudio/minimax-m2.1-gs32", "anthropic/claude-opus-4-6"],
},
models: {
"anthropic/claude-sonnet-4-5": { alias: "Sonnet" },
"lmstudio/minimax-m2.1-gs32": { alias: "MiniMax Local" },
"anthropic/claude-opus-4-5": { alias: "Opus" },
"anthropic/claude-opus-4-6": { alias: "Opus" },
},
},
},

View File

@@ -243,7 +243,7 @@ Even with strong system prompts, **prompt injection is not solved**. System prom
- Run sensitive tool execution in a sandbox; keep secrets out of the agent's reachable filesystem.
- Note: sandboxing is opt-in. If sandbox mode is off, exec runs on the gateway host even though `tools.exec.host` defaults to `sandbox`, and host exec does not require approvals unless you set `host=gateway` and configure exec approvals.
- Limit high-risk tools (`exec`, `browser`, `web_fetch`, `web_search`) to trusted agents or explicit allowlists.
- **Model choice matters:** older/legacy models can be less robust against prompt injection and tool misuse. Prefer modern, instruction-hardened models for any bot with tools. We recommend Anthropic Opus 4.5 because its quite good at recognizing prompt injections (see [“A step forward on safety”](https://www.anthropic.com/news/claude-opus-4-5)).
- **Model choice matters:** older/legacy models can be less robust against prompt injection and tool misuse. Prefer modern, instruction-hardened models for any bot with tools. We recommend Anthropic Opus 4.6 (or the latest Opus) because it's strong at recognizing prompt injections (see [“A step forward on safety”](https://www.anthropic.com/news/claude-opus-4-5)).
Red flags to treat as untrusted:

View File

@@ -707,7 +707,7 @@ Yes - via pi-ai's **Amazon Bedrock (Converse)** provider with **manual config**.
### How does Codex auth work
OpenClaw supports **OpenAI Code (Codex)** via OAuth (ChatGPT sign-in). The wizard can run the OAuth flow and will set the default model to `openai-codex/gpt-5.2` when appropriate. See [Model providers](/concepts/model-providers) and [Wizard](/start/wizard).
OpenClaw supports **OpenAI Code (Codex)** via OAuth (ChatGPT sign-in). The wizard can run the OAuth flow and will set the default model to `openai-codex/gpt-5.3-codex` when appropriate. See [Model providers](/concepts/model-providers) and [Wizard](/start/wizard).
### Do you support OpenAI subscription auth (Codex OAuth)
@@ -1936,11 +1936,11 @@ OpenClaw's default model is whatever you set as:
agents.defaults.model.primary
```
Models are referenced as `provider/model` (example: `anthropic/claude-opus-4-5`). If you omit the provider, OpenClaw currently assumes `anthropic` as a temporary deprecation fallback - but you should still **explicitly** set `provider/model`.
Models are referenced as `provider/model` (example: `anthropic/claude-opus-4-6`). If you omit the provider, OpenClaw currently assumes `anthropic` as a temporary deprecation fallback - but you should still **explicitly** set `provider/model`.
### What model do you recommend
**Recommended default:** `anthropic/claude-opus-4-5`.
**Recommended default:** `anthropic/claude-opus-4-6`.
**Good alternative:** `anthropic/claude-sonnet-4-5`.
**Reliable (less character):** `openai/gpt-5.2` - nearly as good as Opus, just less personality.
**Budget:** `zai/glm-4.7`.
@@ -1989,7 +1989,7 @@ Docs: [Models](/concepts/models), [Configure](/cli/configure), [Config](/cli/con
### What do OpenClaw, Flawd, and Krill use for models
- **OpenClaw + Flawd:** Anthropic Opus (`anthropic/claude-opus-4-5`) - see [Anthropic](/providers/anthropic).
- **OpenClaw + Flawd:** Anthropic Opus (`anthropic/claude-opus-4-6`) - see [Anthropic](/providers/anthropic).
- **Krill:** MiniMax M2.1 (`minimax/MiniMax-M2.1`) - see [MiniMax](/providers/minimax).
### How do I switch models on the fly without restarting
@@ -2029,7 +2029,7 @@ It also shows the configured provider endpoint (`baseUrl`) and API mode (`api`)
Re-run `/model` **without** the `@profile` suffix:
```
/model anthropic/claude-opus-4-5
/model anthropic/claude-opus-4-6
```
If you want to return to the default, pick it from `/model` (or send `/model <default provider/model>`).
@@ -2039,8 +2039,8 @@ Use `/model status` to confirm which auth profile is active.
Yes. Set one as default and switch as needed:
- **Quick switch (per session):** `/model gpt-5.2` for daily tasks, `/model gpt-5.2-codex` for coding.
- **Default + switch:** set `agents.defaults.model.primary` to `openai-codex/gpt-5.2`, then switch to `openai-codex/gpt-5.2-codex` when coding (or the other way around).
- **Quick switch (per session):** `/model gpt-5.2` for daily tasks, `/model gpt-5.3-codex` for coding.
- **Default + switch:** set `agents.defaults.model.primary` to `openai-codex/gpt-5.3-codex`, then switch to `openai-codex/gpt-5.2` for everyday tasks (or the other way around).
- **Sub-agents:** route coding tasks to sub-agents with a different default model.
See [Models](/concepts/models) and [Slash commands](/tools/slash-commands).
@@ -2118,7 +2118,7 @@ Docs: [Models](/concepts/models), [Multi-Agent Routing](/concepts/multi-agent),
Yes. OpenClaw ships a few default shorthands (only applied when the model exists in `agents.defaults.models`):
- `opus` → `anthropic/claude-opus-4-5`
- `opus` → `anthropic/claude-opus-4-6`
- `sonnet` → `anthropic/claude-sonnet-4-5`
- `gpt` → `openai/gpt-5.2`
- `gpt-mini` → `openai/gpt-5-mini`
@@ -2135,9 +2135,9 @@ Aliases come from `agents.defaults.models.<modelId>.alias`. Example:
{
agents: {
defaults: {
model: { primary: "anthropic/claude-opus-4-5" },
model: { primary: "anthropic/claude-opus-4-6" },
models: {
"anthropic/claude-opus-4-5": { alias: "opus" },
"anthropic/claude-opus-4-6": { alias: "opus" },
"anthropic/claude-sonnet-4-5": { alias: "sonnet" },
"anthropic/claude-haiku-4-5": { alias: "haiku" },
},
@@ -2823,7 +2823,7 @@ You can add options like `debounce:2s cap:25 drop:summarize` for followup modes.
**Q: "What's the default model for Anthropic with an API key?"**
**A:** In OpenClaw, credentials and model selection are separate. Setting `ANTHROPIC_API_KEY` (or storing an Anthropic API key in auth profiles) enables authentication, but the actual default model is whatever you configure in `agents.defaults.model.primary` (for example, `anthropic/claude-sonnet-4-5` or `anthropic/claude-opus-4-5`). If you see `No credentials found for profile "anthropic:default"`, it means the Gateway couldn't find Anthropic credentials in the expected `auth-profiles.json` for the agent that's running.
**A:** In OpenClaw, credentials and model selection are separate. Setting `ANTHROPIC_API_KEY` (or storing an Anthropic API key in auth profiles) enables authentication, but the actual default model is whatever you configure in `agents.defaults.model.primary` (for example, `anthropic/claude-sonnet-4-5` or `anthropic/claude-opus-4-6`). If you see `No credentials found for profile "anthropic:default"`, it means the Gateway couldn't find Anthropic credentials in the expected `auth-profiles.json` for the agent that's running.
---

View File

@@ -186,7 +186,7 @@ If you omit `capabilities`, the entry is eligible for the list it appears in.
**Image**
- Prefer your active model if it supports images.
- Good defaults: `openai/gpt-5.2`, `anthropic/claude-opus-4-5`, `google/gemini-3-pro-preview`.
- Good defaults: `openai/gpt-5.2`, `anthropic/claude-opus-4-6`, `google/gemini-3-pro-preview`.
**Audio**
@@ -300,7 +300,7 @@ When `mode: "all"`, outputs are labeled `[Image 1/2]`, `[Audio 2/2]`, etc.
maxChars: 500,
models: [
{ provider: "openai", model: "gpt-5.2" },
{ provider: "anthropic", model: "claude-opus-4-5" },
{ provider: "anthropic", model: "claude-opus-4-6" },
{
type: "cli",
command: "gemini",

View File

@@ -148,7 +148,7 @@ cat > /data/openclaw.json << 'EOF'
"agents": {
"defaults": {
"model": {
"primary": "anthropic/claude-opus-4-5",
"primary": "anthropic/claude-opus-4-6",
"fallbacks": ["anthropic/claude-sonnet-4-5", "openai/gpt-4o"]
},
"maxConcurrent": 4

View File

@@ -31,7 +31,7 @@ openclaw onboard --anthropic-api-key "$ANTHROPIC_API_KEY"
```json5
{
env: { ANTHROPIC_API_KEY: "sk-ant-..." },
agents: { defaults: { model: { primary: "anthropic/claude-opus-4-5" } } },
agents: { defaults: { model: { primary: "anthropic/claude-opus-4-6" } } },
}
```
@@ -54,7 +54,7 @@ Use the `cacheRetention` parameter in your model config:
agents: {
defaults: {
models: {
"anthropic/claude-opus-4-5": {
"anthropic/claude-opus-4-6": {
params: { cacheRetention: "long" },
},
},
@@ -114,7 +114,7 @@ openclaw onboard --auth-choice setup-token
```json5
{
agents: { defaults: { model: { primary: "anthropic/claude-opus-4-5" } } },
agents: { defaults: { model: { primary: "anthropic/claude-opus-4-6" } } },
}
```

View File

@@ -29,7 +29,7 @@ See [Venice AI](/providers/venice).
```json5
{
agents: { defaults: { model: { primary: "anthropic/claude-opus-4-5" } } },
agents: { defaults: { model: { primary: "anthropic/claude-opus-4-6" } } },
}
```

View File

@@ -96,7 +96,7 @@ Configure via CLI:
### MiniMax M2.1 as fallback (Opus primary)
**Best for:** keep Opus 4.5 as primary, fail over to MiniMax M2.1.
**Best for:** keep Opus 4.6 as primary, fail over to MiniMax M2.1.
```json5
{
@@ -104,11 +104,11 @@ Configure via CLI:
agents: {
defaults: {
models: {
"anthropic/claude-opus-4-5": { alias: "opus" },
"anthropic/claude-opus-4-6": { alias: "opus" },
"minimax/MiniMax-M2.1": { alias: "minimax" },
},
model: {
primary: "anthropic/claude-opus-4-5",
primary: "anthropic/claude-opus-4-6",
fallbacks: ["minimax/MiniMax-M2.1"],
},
},

View File

@@ -27,7 +27,7 @@ See [Venice AI](/providers/venice).
```json5
{
agents: { defaults: { model: { primary: "anthropic/claude-opus-4-5" } } },
agents: { defaults: { model: { primary: "anthropic/claude-opus-4-6" } } },
}
```

View File

@@ -29,7 +29,7 @@ openclaw onboard --openai-api-key "$OPENAI_API_KEY"
```json5
{
env: { OPENAI_API_KEY: "sk-..." },
agents: { defaults: { model: { primary: "openai/gpt-5.2" } } },
agents: { defaults: { model: { primary: "openai/gpt-5.1-codex" } } },
}
```
@@ -52,7 +52,7 @@ openclaw models auth login --provider openai-codex
```json5
{
agents: { defaults: { model: { primary: "openai-codex/gpt-5.2" } } },
agents: { defaults: { model: { primary: "openai-codex/gpt-5.3-codex" } } },
}
```

View File

@@ -25,7 +25,7 @@ openclaw onboard --opencode-zen-api-key "$OPENCODE_API_KEY"
```json5
{
env: { OPENCODE_API_KEY: "sk-..." },
agents: { defaults: { model: { primary: "opencode/claude-opus-4-5" } } },
agents: { defaults: { model: { primary: "opencode/claude-opus-4-6" } } },
}
```

View File

@@ -28,7 +28,7 @@ openclaw onboard --auth-choice ai-gateway-api-key
{
agents: {
defaults: {
model: { primary: "vercel-ai-gateway/anthropic/claude-opus-4.5" },
model: { primary: "vercel-ai-gateway/anthropic/claude-opus-4.6" },
},
},
}

View File

@@ -142,7 +142,7 @@ Example:
{
logging: { level: "info" },
agent: {
model: "anthropic/claude-opus-4-5",
model: "anthropic/claude-opus-4-6",
workspace: "~/.openclaw/workspace",
thinkingDefault: "high",
timeoutSeconds: 1800,

View File

@@ -135,12 +135,15 @@ What you set:
<Accordion title="OpenAI Code subscription (OAuth)">
Browser flow; paste `code#state`.
Sets `agents.defaults.model` to `openai-codex/gpt-5.2` when model is unset or `openai/*`.
Sets `agents.defaults.model` to `openai-codex/gpt-5.3-codex` when model is unset or `openai/*`.
</Accordion>
<Accordion title="OpenAI API key">
Uses `OPENAI_API_KEY` if present or prompts for a key, then saves it to
`~/.openclaw/.env` so launchd can read it.
Sets `agents.defaults.model` to `openai/gpt-5.1-codex` when model is unset, `openai/*`, or `openai-codex/*`.
</Accordion>
<Accordion title="OpenCode Zen">
Prompts for `OPENCODE_API_KEY` (or `OPENCODE_ZEN_API_KEY`).

View File

@@ -110,7 +110,7 @@ Live tests are split into two layers so we can isolate failures:
- How to select models:
- `OPENCLAW_LIVE_MODELS=modern` to run the modern allowlist (Opus/Sonnet/Haiku 4.5, GPT-5.x + Codex, Gemini 3, GLM 4.7, MiniMax M2.1, Grok 4)
- `OPENCLAW_LIVE_MODELS=all` is an alias for the modern allowlist
- or `OPENCLAW_LIVE_MODELS="openai/gpt-5.2,anthropic/claude-opus-4-5,..."` (comma allowlist)
- or `OPENCLAW_LIVE_MODELS="openai/gpt-5.2,anthropic/claude-opus-4-6,..."` (comma allowlist)
- How to select providers:
- `OPENCLAW_LIVE_PROVIDERS="google,google-antigravity,google-gemini-cli"` (comma allowlist)
- Where keys come from:
@@ -172,7 +172,7 @@ openclaw models list --json
- Profile: `OPENCLAW_LIVE_SETUP_TOKEN_PROFILE=anthropic:setup-token-test`
- Raw token: `OPENCLAW_LIVE_SETUP_TOKEN_VALUE=sk-ant-oat01-...`
- Model override (optional):
- `OPENCLAW_LIVE_SETUP_TOKEN_MODEL=anthropic/claude-opus-4-5`
- `OPENCLAW_LIVE_SETUP_TOKEN_MODEL=anthropic/claude-opus-4-6`
Setup example:
@@ -193,8 +193,8 @@ OPENCLAW_LIVE_SETUP_TOKEN=1 OPENCLAW_LIVE_SETUP_TOKEN_PROFILE=anthropic:setup-to
- Command: `claude`
- Args: `["-p","--output-format","json","--dangerously-skip-permissions"]`
- Overrides (optional):
- `OPENCLAW_LIVE_CLI_BACKEND_MODEL="claude-cli/claude-opus-4-5"`
- `OPENCLAW_LIVE_CLI_BACKEND_MODEL="codex-cli/gpt-5.2-codex"`
- `OPENCLAW_LIVE_CLI_BACKEND_MODEL="claude-cli/claude-opus-4-6"`
- `OPENCLAW_LIVE_CLI_BACKEND_MODEL="codex-cli/gpt-5.3-codex"`
- `OPENCLAW_LIVE_CLI_BACKEND_COMMAND="/full/path/to/claude"`
- `OPENCLAW_LIVE_CLI_BACKEND_ARGS='["-p","--output-format","json","--permission-mode","bypassPermissions"]'`
- `OPENCLAW_LIVE_CLI_BACKEND_CLEAR_ENV='["ANTHROPIC_API_KEY","ANTHROPIC_API_KEY_OLD"]'`
@@ -223,7 +223,7 @@ Narrow, explicit allowlists are fastest and least flaky:
- `OPENCLAW_LIVE_GATEWAY_MODELS="openai/gpt-5.2" pnpm test:live src/gateway/gateway-models.profiles.live.test.ts`
- Tool calling across several providers:
- `OPENCLAW_LIVE_GATEWAY_MODELS="openai/gpt-5.2,anthropic/claude-opus-4-5,google/gemini-3-flash-preview,zai/glm-4.7,minimax/minimax-m2.1" pnpm test:live src/gateway/gateway-models.profiles.live.test.ts`
- `OPENCLAW_LIVE_GATEWAY_MODELS="openai/gpt-5.2,anthropic/claude-opus-4-6,google/gemini-3-flash-preview,zai/glm-4.7,minimax/minimax-m2.1" pnpm test:live src/gateway/gateway-models.profiles.live.test.ts`
- Google focus (Gemini API key + Antigravity):
- Gemini (API key): `OPENCLAW_LIVE_GATEWAY_MODELS="google/gemini-3-flash-preview" pnpm test:live src/gateway/gateway-models.profiles.live.test.ts`
@@ -247,22 +247,22 @@ There is no fixed “CI model list” (live is opt-in), but these are the **reco
This is the “common models” run we expect to keep working:
- OpenAI (non-Codex): `openai/gpt-5.2` (optional: `openai/gpt-5.1`)
- OpenAI Codex: `openai-codex/gpt-5.2` (optional: `openai-codex/gpt-5.2-codex`)
- Anthropic: `anthropic/claude-opus-4-5` (or `anthropic/claude-sonnet-4-5`)
- OpenAI Codex: `openai-codex/gpt-5.3-codex` (optional: `openai-codex/gpt-5.2`)
- Anthropic: `anthropic/claude-opus-4-6` (or `anthropic/claude-sonnet-4-5`)
- Google (Gemini API): `google/gemini-3-pro-preview` and `google/gemini-3-flash-preview` (avoid older Gemini 2.x models)
- Google (Antigravity): `google-antigravity/claude-opus-4-5-thinking` and `google-antigravity/gemini-3-flash`
- Z.AI (GLM): `zai/glm-4.7`
- MiniMax: `minimax/minimax-m2.1`
Run gateway smoke with tools + image:
`OPENCLAW_LIVE_GATEWAY_MODELS="openai/gpt-5.2,openai-codex/gpt-5.2,anthropic/claude-opus-4-5,google/gemini-3-pro-preview,google/gemini-3-flash-preview,google-antigravity/claude-opus-4-5-thinking,google-antigravity/gemini-3-flash,zai/glm-4.7,minimax/minimax-m2.1" pnpm test:live src/gateway/gateway-models.profiles.live.test.ts`
`OPENCLAW_LIVE_GATEWAY_MODELS="openai/gpt-5.2,openai-codex/gpt-5.3-codex,anthropic/claude-opus-4-6,google/gemini-3-pro-preview,google/gemini-3-flash-preview,google-antigravity/claude-opus-4-5-thinking,google-antigravity/gemini-3-flash,zai/glm-4.7,minimax/minimax-m2.1" pnpm test:live src/gateway/gateway-models.profiles.live.test.ts`
### Baseline: tool calling (Read + optional Exec)
Pick at least one per provider family:
- OpenAI: `openai/gpt-5.2` (or `openai/gpt-5-mini`)
- Anthropic: `anthropic/claude-opus-4-5` (or `anthropic/claude-sonnet-4-5`)
- Anthropic: `anthropic/claude-opus-4-6` (or `anthropic/claude-sonnet-4-5`)
- Google: `google/gemini-3-flash-preview` (or `google/gemini-3-pro-preview`)
- Z.AI (GLM): `zai/glm-4.7`
- MiniMax: `minimax/minimax-m2.1`

View File

@@ -93,9 +93,9 @@ https://docs.anthropic.com/docs/build-with-claude/prompt-caching
agents:
defaults:
model:
primary: "anthropic/claude-opus-4-5"
primary: "anthropic/claude-opus-4-6"
models:
"anthropic/claude-opus-4-5":
"anthropic/claude-opus-4-6":
params:
cacheRetention: "long"
heartbeat:

View File

@@ -55,7 +55,7 @@ without writing custom OpenClaw code for each workflow.
"defaultProvider": "openai-codex",
"defaultModel": "gpt-5.2",
"defaultAuthProfileId": "main",
"allowedModels": ["openai-codex/gpt-5.2"],
"allowedModels": ["openai-codex/gpt-5.3-codex"],
"maxTokens": 800,
"timeoutMs": 30000
}