docs: clarify adaptive thinking and openai websocket docs

This commit is contained in:
Peter Steinberger
2026-03-02 05:16:09 +00:00
parent e1e715c53d
commit bc0288bcfb
4 changed files with 25 additions and 8 deletions


@@ -35,6 +35,15 @@ openclaw onboard --anthropic-api-key "$ANTHROPIC_API_KEY"
}
```
## Thinking defaults (Claude 4.6)
- Anthropic Claude 4.6 models default to `adaptive` thinking in OpenClaw when no explicit thinking level is set.
- You can override the level per message with `/think:<level>`, or pin it in model params via
  `agents.defaults.models["anthropic/<model>"].params.thinking`.
- Related Anthropic docs:
- https://platform.claude.com/docs/en/build-with-claude/adaptive-thinking
- https://platform.claude.com/docs/en/build-with-claude/extended-thinking
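
For example, a config sketch that pins the thinking level for one model (the model id is a placeholder; substitute a real Claude 4.6 model id, and note that level names other than `adaptive` are not shown here):

```json5
{
  agents: {
    defaults: {
      models: {
        // "<model>" is a placeholder; use your actual Claude 4.6 model id
        "anthropic/<model>": {
          params: {
            // "adaptive" is the default for Claude 4.6 when nothing is set;
            // pinning it here just makes the choice explicit
            thinking: "adaptive",
          },
        },
      },
    },
  },
}
```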
## Prompt caching (Anthropic API)
OpenClaw supports Anthropic's prompt caching feature. This is **API-only**; subscription auth does not honor cache settings.
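
For reference, at the API level Anthropic's prompt caching works by marking content blocks with `cache_control`. A minimal raw request-body sketch (model id and prompt text are placeholders; OpenClaw handles this request shaping itself):

```json5
{
  model: "claude-<model>",  // placeholder model id
  max_tokens: 1024,
  system: [
    {
      type: "text",
      text: "Long, reusable system prompt...",
      // marks this block as a cache breakpoint (Anthropic API)
      cache_control: { type: "ephemeral" },
    },
  ],
  messages: [{ role: "user", content: "Hello" }],
}
```

This only illustrates what the API-level feature does; it is not OpenClaw configuration.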


@@ -29,7 +29,7 @@ openclaw onboard --openai-api-key "$OPENAI_API_KEY"
```json5
{
env: { OPENAI_API_KEY: "sk-..." },
agents: { defaults: { model: { primary: "openai/gpt-5.1-codex" } } },
agents: { defaults: { model: { primary: "openai/gpt-5.2" } } },
}
```
@@ -71,6 +71,11 @@ You can set `agents.defaults.models.<provider/model>.params.transport`:
For `openai/*` (Responses API), OpenClaw also enables WebSocket warm-up by
default (`openaiWsWarmup: true`) when WebSocket transport is used.
Related OpenAI docs:
- Realtime API with WebSocket: https://platform.openai.com/docs/guides/realtime-websocket
- Streaming API responses (SSE): https://platform.openai.com/docs/guides/streaming-responses
```json5
{
agents: {
@@ -100,7 +105,7 @@ OpenAI docs describe warm-up as optional. OpenClaw enables it by default for
agents: {
defaults: {
models: {
"openai/gpt-5": {
"openai/gpt-5.2": {
params: {
openaiWsWarmup: false,
},
@@ -118,7 +123,7 @@ OpenAI docs describe warm-up as optional. OpenClaw enables it by default for
agents: {
defaults: {
models: {
"openai/gpt-5": {
"openai/gpt-5.2": {
params: {
openaiWsWarmup: true,
},
@@ -151,7 +156,7 @@ Responses models (for example Azure OpenAI Responses):
agents: {
defaults: {
models: {
"azure-openai-responses/gpt-4o": {
"azure-openai-responses/gpt-5.2": {
params: {
responsesServerCompaction: true,
},
@@ -169,7 +174,7 @@ Responses models (for example Azure OpenAI Responses):
agents: {
defaults: {
models: {
"openai/gpt-5": {
"openai/gpt-5.2": {
params: {
responsesServerCompaction: true,
responsesCompactThreshold: 120000,
@@ -188,7 +193,7 @@ Responses models (for example Azure OpenAI Responses):
agents: {
defaults: {
models: {
"openai/gpt-5": {
"openai/gpt-5.2": {
params: {
responsesServerCompaction: false,
},