Vignesh Natarajan
a623c9c8d2
Onboarding: enforce custom model context minimum
2026-02-28 13:37:21 -08:00
Kunal Karmakar
720e1479b8
Remove temperature
2026-02-28 15:58:20 +05:30
Kunal Karmakar
2258e736b0
Reduce default max tokens
2026-02-28 15:58:20 +05:30
Kunal Karmakar
2fe5620763
Fix linting issue
2026-02-28 15:58:20 +05:30
Kunal Karmakar
4ed12c18a0
Conditional azure openai endpoint usage
2026-02-28 15:58:20 +05:30
Kunal Karmakar
06a3175cd1
Fix linting issue
2026-02-28 15:58:20 +05:30
Kunal Karmakar
955768d132
Fix default max tokens
2026-02-28 15:58:20 +05:30
Kunal Karmakar
978d9ae199
Fix azure openai endpoint validation
2026-02-28 15:58:20 +05:30
Sid
ee2eaddeb3
fix(onboard): increase verification timeout and reduce max_tokens for custom provider probes (#27380)
* fix(onboard): increase verification timeout and reduce max_tokens for custom provider probes
The onboard wizard sends a chat-completion request to verify custom
providers. With max_tokens: 1024 and a 10 s timeout, large local
models (e.g. Qwen3.5-27B on llama.cpp) routinely time out because
the server needs to load the model and generate up to 1024 tokens
before responding.
Changes:
- Raise VERIFY_TIMEOUT_MS from 10 s to 30 s
- Lower max_tokens from 1024 to 1 (verification only needs a single
token to confirm the API is reachable and the model ID is valid)
- Add explicit stream: false to both OpenAI and Anthropic probes
Closes #27346
Made-with: Cursor
* Changelog: note custom-provider onboarding verification fix
---------
Co-authored-by: Philipp Spiess <hello@philippspiess.com>
2026-02-27 22:51:58 +01:00
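The probe changes described in the commit above can be sketched as follows. `VERIFY_TIMEOUT_MS` is the constant named in the commit message; `buildVerifyBody` is a hypothetical helper name, not necessarily the repository's actual identifier:

```typescript
// Sketch of the verification probe after the fix described above.
const VERIFY_TIMEOUT_MS = 30_000; // raised from 10_000 (10 s -> 30 s)

function buildVerifyBody(model: string): Record<string, unknown> {
  return {
    model,
    messages: [{ role: "user", content: "ping" }],
    // One generated token is enough to confirm the API is reachable
    // and the model ID is valid.
    max_tokens: 1,
    // Explicit, so servers that default to streaming reply in one shot.
    stream: false,
  };
}
```

A slow local server now only has to load the model and emit a single token within the 30 s window, rather than generate up to 1024 tokens in 10 s.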
joshavant
5e3a86fd2f
feat(secrets): expand onboarding secret-ref flows and custom-provider parity
2026-02-26 14:47:22 +00:00
joshavant
b50c4c2c44
Gateway: add eager secrets runtime snapshot activation
2026-02-26 14:47:22 +00:00
Glucksberg
1565d7e7b3
fix: increase verification max_tokens to 1024 for Poe API compatibility
Poe API's Extended Thinking models (e.g. claude-sonnet-4.6) require
budget_tokens >= 1024. The previous values (5 for OpenAI, 16 for
Anthropic) caused HTTP 400 errors during provider verification.
Fixes #23433
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-24 03:47:49 +00:00
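The constraint behind this fix can be illustrated with a minimal clamp; `MIN_THINKING_BUDGET` and `clampVerifyMaxTokens` are illustrative names only, not identifiers from the codebase:

```typescript
// Illustrative only: Poe's Extended Thinking models reject requests whose
// thinking budget is below 1024 tokens, so a verification probe aimed at
// such models must request at least that many tokens.
const MIN_THINKING_BUDGET = 1024;

function clampVerifyMaxTokens(requested: number): number {
  // The earlier probe values (5 for OpenAI, 16 for Anthropic) fall below
  // this floor and triggered HTTP 400 during provider verification.
  return Math.max(requested, MIN_THINKING_BUDGET);
}
```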
Peter Steinberger
4bf67ab698
refactor(commands): centralize shared command formatting helpers
2026-02-22 21:19:09 +00:00
Jeremy Mumford
6ef365d062
resolved bug with raw calls to Anthropic-compatible APIs (#21336)
2026-02-19 15:04:49 -08:00
Peter Steinberger
4f36c813a7
refactor(commands): share custom api verification request flow
2026-02-18 18:30:13 +00:00
Peter Steinberger
b8b43175c5
style: align formatting with oxfmt 0.33
2026-02-18 01:34:35 +00:00
Peter Steinberger
31f9be126c
style: run oxfmt and fix gate failures
2026-02-18 01:29:02 +00:00
cpojer
d0cb8c19b2
chore: wtf.
2026-02-17 13:36:48 +09:00
Sebastian
ed11e93cf2
chore(format)
2026-02-16 23:20:16 -05:00
cpojer
90ef2d6bdf
chore: Update formatting.
2026-02-17 09:18:40 +09:00
OpenClaw Bot
068260bbea
fix: add api-version query param for Azure verification
2026-02-17 00:00:08 +01:00
OpenClaw Bot
960cc11513
fix: add Azure AI Foundry URL support for custom providers
Detects Azure AI Foundry URLs (services.ai.azure.com and
openai.azure.com) and transforms them to include the proper
deployment path (/openai/deployments/<model-id>) required by
Azure's API. This fixes the 400 error when configuring OpenAI
models from Azure AI Foundry.
Fixes openclaw/openclaw#17992
2026-02-17 00:00:08 +01:00
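The URL rewrite described by the two Azure commits above can be sketched as below. `toAzureDeploymentUrl` is a hypothetical helper name, and the `api-version` value is an assumed placeholder, not the value the code actually uses:

```typescript
// Hypothetical sketch: detect Azure AI Foundry / Azure OpenAI hosts and
// insert the deployment path Azure's API expects.
function toAzureDeploymentUrl(baseUrl: string, modelId: string): string {
  const url = new URL(baseUrl);
  const isAzure =
    url.host.endsWith("services.ai.azure.com") ||
    url.host.endsWith("openai.azure.com");
  if (!isAzure) return baseUrl; // non-Azure endpoints pass through untouched
  url.pathname = `/openai/deployments/${encodeURIComponent(modelId)}`;
  // The follow-up commit adds the required api-version query parameter;
  // "2024-02-01" is an assumed placeholder version here.
  url.searchParams.set("api-version", "2024-02-01");
  return url.toString();
}
```

Without the deployment path, Azure rejects plain chat-completion URLs with a 400, which is the failure mode the commit body describes.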
Peter Steinberger
fef86e475b
refactor: dedupe shared helpers across ui/gateway/extensions
2026-02-15 03:34:14 +00:00
ENCHIGO
029b77c85b
onboard: support custom provider in non-interactive flow (#14223)
Merged via /review-pr -> /prepare-pr -> /merge-pr.
Prepared head SHA: 5b98d6514e
Co-authored-by: ENCHIGO <38551565+ENCHIGO@users.noreply.github.com>
Co-authored-by: gumadeiras <5599352+gumadeiras@users.noreply.github.com>
Reviewed-by: @gumadeiras
2026-02-11 14:48:45 -05:00
Gustavo Madeira Santana
2914cb1d48
Onboard: rename Custom API Endpoint to Custom Provider
2026-02-10 07:36:04 -05:00
Blossom
c0befdee0b
feat(onboard): add custom/local API configuration flow (#11106)
* feat(onboard): add custom/local API configuration flow
* ci: retry macos check
* fix: expand custom API onboarding (#11106) (thanks @MackDing)
* fix: refine custom endpoint detection (#11106) (thanks @MackDing)
* fix: streamline custom endpoint onboarding (#11106) (thanks @MackDing)
* fix: skip model picker for custom endpoint (#11106) (thanks @MackDing)
* fix: avoid allowlist picker for custom endpoint (#11106) (thanks @MackDing)
* Onboard: reuse shared fetch timeout helper (#11106) (thanks @MackDing)
* Onboard: clarify default base URL name (#11106) (thanks @MackDing)
---------
Co-authored-by: OpenClaw Contributor <contributor@openclaw.ai>
Co-authored-by: Gustavo Madeira Santana <gumadeiras@gmail.com>
2026-02-10 07:31:02 -05:00