Mirror of https://github.com/openclaw/openclaw.git
Synced 2026-03-30 04:36:04 +00:00
feat(openai): add gpt-5.4 support for API and Codex OAuth (#36590)
* feat(openai): add gpt-5.4 support and priority processing
* feat(openai-codex): add gpt-5.4 oauth support
* fix(openai): preserve provider overrides in gpt-5.4 fallback
* fix(openai-codex): keep xhigh for gpt-5.4 default
* fix(models): preserve configured overrides in list output
* fix(models): close gpt-5.4 integration gaps
* fix(openai): scope service tier to public api
* fix(openai): complete prep followups for gpt-5.4 support (#36590) (thanks @dorukardahan)

---------

Co-authored-by: Tyler Yust <TYTYYUST@YAHOO.COM>
@@ -30,10 +30,13 @@ openclaw onboard --openai-api-key "$OPENAI_API_KEY"
 ```json5
 {
   env: { OPENAI_API_KEY: "sk-..." },
-  agents: { defaults: { model: { primary: "openai/gpt-5.2" } } },
+  agents: { defaults: { model: { primary: "openai/gpt-5.4" } } },
 }
 ```
+
+OpenAI's current API model docs list `gpt-5.4` and `gpt-5.4-pro` for direct
+OpenAI API usage. OpenClaw forwards both through the `openai/*` Responses path.
 
 ## Option B: OpenAI Code (Codex) subscription
 
 **Best for:** using ChatGPT/Codex subscription access instead of an API key.
@@ -53,10 +56,13 @@ openclaw models auth login --provider openai-codex
 
 ```json5
 {
-  agents: { defaults: { model: { primary: "openai-codex/gpt-5.3-codex" } } },
+  agents: { defaults: { model: { primary: "openai-codex/gpt-5.4" } } },
 }
 ```
+
+OpenAI's current Codex docs list `gpt-5.4` as the current Codex model. OpenClaw
+maps that to `openai-codex/gpt-5.4` for ChatGPT/Codex OAuth usage.
 
 ### Transport default
 
 OpenClaw uses `pi-ai` for model streaming. For both `openai/*` and
@@ -81,9 +87,9 @@ Related OpenAI docs:
 {
   agents: {
     defaults: {
-      model: { primary: "openai-codex/gpt-5.3-codex" },
+      model: { primary: "openai-codex/gpt-5.4" },
       models: {
-        "openai-codex/gpt-5.3-codex": {
+        "openai-codex/gpt-5.4": {
           params: {
             transport: "auto",
           },
@@ -106,7 +112,7 @@ OpenAI docs describe warm-up as optional. OpenClaw enables it by default for
   agents: {
     defaults: {
       models: {
-        "openai/gpt-5.2": {
+        "openai/gpt-5.4": {
           params: {
             openaiWsWarmup: false,
           },
@@ -124,7 +130,7 @@ OpenAI docs describe warm-up as optional. OpenClaw enables it by default for
   agents: {
     defaults: {
       models: {
-        "openai/gpt-5.2": {
+        "openai/gpt-5.4": {
           params: {
             openaiWsWarmup: true,
           },
@@ -135,6 +141,30 @@ OpenAI docs describe warm-up as optional. OpenClaw enables it by default for
 }
 ```
+
+### OpenAI priority processing
+
+OpenAI's API exposes priority processing via `service_tier=priority`. In
+OpenClaw, set `agents.defaults.models["openai/<model>"].params.serviceTier` to
+pass that field through on direct `openai/*` Responses requests.
+
+```json5
+{
+  agents: {
+    defaults: {
+      models: {
+        "openai/gpt-5.4": {
+          params: {
+            serviceTier: "priority",
+          },
+        },
+      },
+    },
+  },
+}
+```
+
+Supported values are `auto`, `default`, `flex`, and `priority`.
 
 ### OpenAI Responses server-side compaction
 
 For direct OpenAI Responses models (`openai/*` using `api: "openai-responses"` with
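The `serviceTier` passthrough added in the hunk above can be sketched as a small helper. This is not OpenClaw source — `build_request_body` and its signature are assumptions for illustration; only the snake_case `service_tier` field name and the four allowed values come from the diff:

```python
# Sketch: forwarding a per-model `serviceTier` param as the `service_tier`
# field of an OpenAI Responses request body. Hypothetical helper, not
# OpenClaw code; field name and allowed values are from the doc above.

ALLOWED_TIERS = {"auto", "default", "flex", "priority"}

def build_request_body(model: str, prompt: str, params: dict) -> dict:
    body = {"model": model, "input": prompt}
    tier = params.get("serviceTier")
    if tier is not None:
        if tier not in ALLOWED_TIERS:
            raise ValueError(f"unsupported serviceTier: {tier!r}")
        # OpenAI's API spells the field in snake_case.
        body["service_tier"] = tier
    return body
```

When no `serviceTier` is configured, the field is simply omitted from the request, matching the "scope service tier to public api" fix in the commit message.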
@@ -157,7 +187,7 @@ Responses models (for example Azure OpenAI Responses):
   agents: {
     defaults: {
       models: {
-        "azure-openai-responses/gpt-5.2": {
+        "azure-openai-responses/gpt-5.4": {
           params: {
             responsesServerCompaction: true,
           },
@@ -175,7 +205,7 @@ Responses models (for example Azure OpenAI Responses):
   agents: {
     defaults: {
       models: {
-        "openai/gpt-5.2": {
+        "openai/gpt-5.4": {
           params: {
             responsesServerCompaction: true,
             responsesCompactThreshold: 120000,
@@ -194,7 +224,7 @@ Responses models (for example Azure OpenAI Responses):
   agents: {
     defaults: {
       models: {
-        "openai/gpt-5.2": {
+        "openai/gpt-5.4": {
           params: {
             responsesServerCompaction: false,
           },
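Taken together, the hunks in this diff amount to a user config along these lines. This is a sketch assembled only from snippets shown above — the model names, param keys, and the `120000` threshold all appear in the diff; combining them in one file is illustrative:

```json5
{
  env: { OPENAI_API_KEY: "sk-..." },
  agents: {
    defaults: {
      model: { primary: "openai/gpt-5.4" },
      models: {
        "openai/gpt-5.4": {
          params: {
            serviceTier: "priority",
            openaiWsWarmup: true,
            responsesServerCompaction: true,
            responsesCompactThreshold: 120000,
          },
        },
        "openai-codex/gpt-5.4": {
          params: { transport: "auto" },
        },
      },
    },
  },
}
```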