mirror of https://github.com/openclaw/openclaw.git
synced 2026-03-30 04:55:44 +00:00

feat(onboard): add custom/local API configuration flow (#11106)

* feat(onboard): add custom/local API configuration flow
* ci: retry macos check
* fix: expand custom API onboarding (#11106) (thanks @MackDing)
* fix: refine custom endpoint detection (#11106) (thanks @MackDing)
* fix: streamline custom endpoint onboarding (#11106) (thanks @MackDing)
* fix: skip model picker for custom endpoint (#11106) (thanks @MackDing)
* fix: avoid allowlist picker for custom endpoint (#11106) (thanks @MackDing)
* Onboard: reuse shared fetch timeout helper (#11106) (thanks @MackDing)
* Onboard: clarify default base URL name (#11106) (thanks @MackDing)

Co-authored-by: OpenClaw Contributor <contributor@openclaw.ai>
Co-authored-by: Gustavo Madeira Santana <gumadeiras@gmail.com>
@@ -16,6 +16,7 @@ Docs: https://docs.openclaw.ai
 - Agents: include runtime shell in agent envelopes. (#1835) Thanks @Takhoffman.
 - Agents: auto-select `zai/glm-4.6v` for image understanding when ZAI is primary provider. (#10267) Thanks @liuy.
 - Paths: add `OPENCLAW_HOME` for overriding the home directory used by internal path resolution. (#12091) Thanks @sebslight.
+- Onboarding: add Custom API Endpoint flow for OpenAI and Anthropic-compatible endpoints. (#11106) Thanks @MackDing.

 ### Fixes
@@ -12,6 +12,7 @@ Interactive onboarding wizard (local or remote Gateway setup).
 ## Related guides

 - CLI onboarding hub: [Onboarding Wizard (CLI)](/start/wizard)
+- Onboarding overview: [Onboarding Overview](/start/onboarding-overview)
 - CLI onboarding reference: [CLI Onboarding Reference](/start/wizard-cli-reference)
 - CLI automation: [CLI Automation](/start/wizard-cli-automation)
 - macOS onboarding: [Onboarding (macOS App)](/start/onboarding)
@@ -30,6 +31,8 @@ Flow notes:
 - `quickstart`: minimal prompts, auto-generates a gateway token.
 - `manual`: full prompts for port/bind/auth (alias of `advanced`).
 - Fastest first chat: `openclaw dashboard` (Control UI, no channel setup).
+- Custom API Endpoint: connect any OpenAI or Anthropic compatible endpoint,
+  including hosted providers not listed. Use Unknown to auto-detect.

 ## Common follow-up commands
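For context on what the Custom API Endpoint flow ends up writing, the tests added later in this commit assert on a provider block plus a model alias. A rough, abridged sketch of the resulting config (field names taken from the test assertions; the exact shape may differ):

```json
{
  "models": {
    "providers": {
      "custom": {
        "baseUrl": "http://localhost:11434/v1",
        "api": "openai-completions",
        "models": [{ "id": "llama3" }]
      }
    }
  },
  "agents": {
    "defaults": {
      "models": {
        "custom/llama3": { "alias": "local" }
      }
    }
  }
}
```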
@@ -802,7 +802,12 @@
     },
     {
       "group": "First steps",
-      "pages": ["start/getting-started", "start/wizard", "start/onboarding"]
+      "pages": [
+        "start/getting-started",
+        "start/onboarding-overview",
+        "start/wizard",
+        "start/onboarding"
+      ]
     },
     {
       "group": "Guides",
docs/start/onboarding-overview.md (new file, 51 lines)
@@ -0,0 +1,51 @@
---
summary: "Overview of OpenClaw onboarding options and flows"
read_when:
  - Choosing an onboarding path
  - Setting up a new environment
title: "Onboarding Overview"
sidebarTitle: "Onboarding Overview"
---

# Onboarding Overview

OpenClaw supports multiple onboarding paths depending on where the Gateway runs
and how you prefer to configure providers.

## Choose your onboarding path

- **CLI wizard** for macOS, Linux, and Windows (via WSL2).
- **macOS app** for a guided first run on Apple silicon or Intel Macs.

## CLI onboarding wizard

Run the wizard in a terminal:

```bash
openclaw onboard
```

Use the CLI wizard when you want full control of the Gateway, workspace,
channels, and skills. Docs:

- [Onboarding Wizard (CLI)](/start/wizard)
- [`openclaw onboard` command](/cli/onboard)

## macOS app onboarding

Use the OpenClaw app when you want a fully guided setup on macOS. Docs:

- [Onboarding (macOS App)](/start/onboarding)

## Custom API Endpoint

If you need an endpoint that is not listed, including hosted providers that
expose standard OpenAI or Anthropic APIs, choose **Custom API Endpoint** in the
CLI wizard. You will be asked to:

- Pick OpenAI-compatible, Anthropic-compatible, or **Unknown** (auto-detect).
- Enter a base URL and API key (if required by the provider).
- Provide a model ID and optional alias.
- Choose an Endpoint ID so multiple custom endpoints can coexist.

For detailed steps, follow the CLI onboarding docs above.
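The **Unknown** choice above probes the endpoint to guess its dialect (the hint in the source reads "Probes OpenAI then Anthropic endpoints"). A minimal sketch of that probing idea, assuming an injectable `probe` function so it can run without a live server; `detectCompatibility` and `Probe` are names invented for this sketch, not the wizard's actual API:

```typescript
// Sketch only: try the OpenAI-style route first, then the Anthropic-style one.
type Probe = (url: string) => Promise<{ status: number }>;

async function detectCompatibility(
  baseUrl: string,
  probe: Probe,
): Promise<"openai" | "anthropic" | null> {
  // Keep a trailing slash so URL resolution preserves the base path.
  const base = baseUrl.endsWith("/") ? baseUrl : `${baseUrl}/`;
  // A 404 means the route does not exist; any other status suggests it does.
  const openai = await probe(new URL("chat/completions", base).href);
  if (openai.status !== 404) return "openai";
  const anthropic = await probe(new URL("messages", base).href);
  if (anthropic.status !== 404) return "anthropic";
  return null;
}
```

The real detection logic lives in `src/commands/onboard-custom.ts` further down in this commit.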
@@ -12,6 +12,7 @@ sidebarTitle: "Onboarding: macOS App"
 This doc describes the **current** first‑run onboarding flow. The goal is a
 smooth “day 0” experience: pick where the Gateway runs, connect auth, run the
 wizard, and let the agent bootstrap itself.
+For a general overview of onboarding paths, see [Onboarding Overview](/start/onboarding-overview).

 <Steps>
 <Step title="Approve macOS warning">
@@ -62,7 +62,8 @@ The wizard starts with **QuickStart** (defaults) vs **Advanced** (full control).

 **Local mode (default)** walks you through these steps:

-1. **Model/Auth** — Anthropic API key (recommended), OAuth, OpenAI, or other providers. Pick a default model.
+1. **Model/Auth** — Anthropic API key (recommended), OpenAI, or Custom API Endpoint
+   (OpenAI-compatible, Anthropic-compatible, or Unknown auto-detect). Pick a default model.
 2. **Workspace** — Location for agent files (default `~/.openclaw/workspace`). Seeds bootstrap files.
 3. **Gateway** — Port, bind address, auth mode, Tailscale exposure.
 4. **Channels** — WhatsApp, Telegram, Discord, Google Chat, Mattermost, Signal, BlueBubbles, or iMessage.
@@ -104,5 +105,6 @@ RPC API, and a full list of config fields the wizard writes, see the
 ## Related docs

 - CLI command reference: [`openclaw onboard`](/cli/onboard)
+- Onboarding overview: [Onboarding Overview](/start/onboarding-overview)
 - macOS app onboarding: [Onboarding](/start/onboarding)
 - Agent first-run ritual: [Agent Bootstrapping](/start/bootstrapping)
@@ -25,7 +25,8 @@ export type AuthChoiceGroupId =
   | "qwen"
   | "together"
   | "qianfan"
-  | "xai";
+  | "xai"
+  | "custom";

 export type AuthChoiceGroup = {
   value: AuthChoiceGroupId;
@@ -148,6 +149,12 @@ const AUTH_CHOICE_GROUP_DEFS: {
     hint: "Account ID + Gateway ID + API key",
     choices: ["cloudflare-ai-gateway-api-key"],
   },
+  {
+    value: "custom",
+    label: "Custom API Endpoint",
+    hint: "Any OpenAI or Anthropic compatible endpoint",
+    choices: ["custom-api-key"],
+  },
 ];

 export function buildAuthChoiceOptions(params: {
@@ -252,6 +259,8 @@ export function buildAuthChoiceOptions(params: {
     label: "MiniMax M2.1 Lightning",
     hint: "Faster, higher output cost",
   });
+  options.push({ value: "custom-api-key", label: "Custom API Endpoint" });
+
   if (params.includeSkip) {
     options.push({ value: "skip", label: "Skip for now" });
   }
@@ -42,6 +42,10 @@ export async function promptAuthChoiceGrouped(params: {
       continue;
     }

+    if (group.options.length === 1) {
+      return group.options[0].value;
+    }
+
     const methodSelection = await params.prompter.select({
       message: `${group.label} auth method`,
       options: [...group.options, { value: BACK_VALUE, label: "Back" }],
@@ -35,6 +35,7 @@ const PREFERRED_PROVIDER_BY_AUTH_CHOICE: Partial<Record<AuthChoice, string>> = {
   "qwen-portal": "qwen-portal",
   "minimax-portal": "minimax-portal",
   "qianfan-api-key": "qianfan",
+  "custom-api-key": "custom",
 };

 export function resolvePreferredProviderForAuthChoice(choice: AuthChoice): string | undefined {
@@ -11,6 +11,7 @@ import {
   promptDefaultModel,
   promptModelAllowlist,
 } from "./model-picker.js";
+import { promptCustomApiConfig } from "./onboard-custom.js";

 type GatewayAuthChoice = "token" | "password";
@@ -53,7 +54,10 @@ export async function promptAuthConfig(
   });

   let next = cfg;
-  if (authChoice !== "skip") {
+  if (authChoice === "custom-api-key") {
+    const customResult = await promptCustomApiConfig({ prompter, runtime, config: next });
+    next = customResult.config;
+  } else if (authChoice !== "skip") {
     const applied = await applyAuthChoice({
       authChoice,
       config: next,
@@ -78,16 +82,18 @@
   const anthropicOAuth =
     authChoice === "setup-token" || authChoice === "token" || authChoice === "oauth";

-  const allowlistSelection = await promptModelAllowlist({
-    config: next,
-    prompter,
-    allowedKeys: anthropicOAuth ? ANTHROPIC_OAUTH_MODEL_KEYS : undefined,
-    initialSelections: anthropicOAuth ? ["anthropic/claude-opus-4-6"] : undefined,
-    message: anthropicOAuth ? "Anthropic OAuth models" : undefined,
-  });
-  if (allowlistSelection.models) {
-    next = applyModelAllowlist(next, allowlistSelection.models);
-    next = applyModelFallbacksFromSelection(next, allowlistSelection.models);
+  if (authChoice !== "custom-api-key") {
+    const allowlistSelection = await promptModelAllowlist({
+      config: next,
+      prompter,
+      allowedKeys: anthropicOAuth ? ANTHROPIC_OAUTH_MODEL_KEYS : undefined,
+      initialSelections: anthropicOAuth ? ["anthropic/claude-opus-4-6"] : undefined,
+      message: anthropicOAuth ? "Anthropic OAuth models" : undefined,
+    });
+    if (allowlistSelection.models) {
+      next = applyModelAllowlist(next, allowlistSelection.models);
+      next = applyModelFallbacksFromSelection(next, allowlistSelection.models);
+    }
   }

   return next;
src/commands/onboard-custom.test.ts (new file, 270 lines)
@@ -0,0 +1,270 @@
import { afterEach, describe, expect, it, vi } from "vitest";
import { defaultRuntime } from "../runtime.js";
import { promptCustomApiConfig } from "./onboard-custom.js";

// Mock dependencies
vi.mock("./model-picker.js", () => ({
  applyPrimaryModel: vi.fn((cfg) => cfg),
}));

describe("promptCustomApiConfig", () => {
  afterEach(() => {
    vi.unstubAllGlobals();
    vi.useRealTimers();
  });

  it("handles openai flow and saves alias", async () => {
    const prompter = {
      text: vi
        .fn()
        .mockResolvedValueOnce("http://localhost:11434/v1") // Base URL
        .mockResolvedValueOnce("") // API Key
        .mockResolvedValueOnce("llama3") // Model ID
        .mockResolvedValueOnce("custom") // Endpoint ID
        .mockResolvedValueOnce("local"), // Alias
      progress: vi.fn(() => ({
        update: vi.fn(),
        stop: vi.fn(),
      })),
      select: vi.fn().mockResolvedValueOnce("openai"), // Compatibility
      confirm: vi.fn(),
      note: vi.fn(),
    };

    vi.stubGlobal(
      "fetch",
      vi.fn().mockResolvedValueOnce({
        ok: true,
        json: async () => ({}),
      }),
    );

    const result = await promptCustomApiConfig({
      prompter: prompter as unknown as Parameters<typeof promptCustomApiConfig>[0]["prompter"],
      runtime: { ...defaultRuntime, log: vi.fn() },
      config: {},
    });

    expect(prompter.text).toHaveBeenCalledTimes(5);
    expect(prompter.select).toHaveBeenCalledTimes(1);
    expect(result.config.models?.providers?.custom?.api).toBe("openai-completions");
    expect(result.config.agents?.defaults?.models?.["custom/llama3"]?.alias).toBe("local");
  });

  it("retries when verification fails", async () => {
    const prompter = {
      text: vi
        .fn()
        .mockResolvedValueOnce("http://localhost:11434/v1") // Base URL
        .mockResolvedValueOnce("") // API Key
        .mockResolvedValueOnce("bad-model") // Model ID
        .mockResolvedValueOnce("good-model") // Model ID retry
        .mockResolvedValueOnce("custom") // Endpoint ID
        .mockResolvedValueOnce(""), // Alias
      progress: vi.fn(() => ({
        update: vi.fn(),
        stop: vi.fn(),
      })),
      select: vi
        .fn()
        .mockResolvedValueOnce("openai") // Compatibility
        .mockResolvedValueOnce("model"), // Retry choice
      confirm: vi.fn(),
      note: vi.fn(),
    };

    vi.stubGlobal(
      "fetch",
      vi
        .fn()
        .mockResolvedValueOnce({ ok: false, status: 400, json: async () => ({}) })
        .mockResolvedValueOnce({ ok: true, json: async () => ({}) }),
    );

    await promptCustomApiConfig({
      prompter: prompter as unknown as Parameters<typeof promptCustomApiConfig>[0]["prompter"],
      runtime: { ...defaultRuntime, log: vi.fn() },
      config: {},
    });

    expect(prompter.text).toHaveBeenCalledTimes(6);
    expect(prompter.select).toHaveBeenCalledTimes(2);
  });

  it("detects openai compatibility when unknown", async () => {
    const prompter = {
      text: vi
        .fn()
        .mockResolvedValueOnce("https://example.com/v1") // Base URL
        .mockResolvedValueOnce("test-key") // API Key
        .mockResolvedValueOnce("detected-model") // Model ID
        .mockResolvedValueOnce("custom") // Endpoint ID
        .mockResolvedValueOnce("alias"), // Alias
      progress: vi.fn(() => ({
        update: vi.fn(),
        stop: vi.fn(),
      })),
      select: vi.fn().mockResolvedValueOnce("unknown"),
      confirm: vi.fn(),
      note: vi.fn(),
    };

    vi.stubGlobal(
      "fetch",
      vi.fn().mockResolvedValueOnce({
        ok: true,
        json: async () => ({}),
      }),
    );

    const result = await promptCustomApiConfig({
      prompter: prompter as unknown as Parameters<typeof promptCustomApiConfig>[0]["prompter"],
      runtime: { ...defaultRuntime, log: vi.fn() },
      config: {},
    });

    expect(prompter.text).toHaveBeenCalledTimes(5);
    expect(prompter.select).toHaveBeenCalledTimes(1);
    expect(result.config.models?.providers?.custom?.api).toBe("openai-completions");
  });

  it("re-prompts base url when unknown detection fails", async () => {
    const prompter = {
      text: vi
        .fn()
        .mockResolvedValueOnce("https://bad.example.com/v1") // Base URL #1
        .mockResolvedValueOnce("bad-key") // API Key #1
        .mockResolvedValueOnce("bad-model") // Model ID #1
        .mockResolvedValueOnce("https://ok.example.com/v1") // Base URL #2
        .mockResolvedValueOnce("ok-key") // API Key #2
        .mockResolvedValueOnce("custom") // Endpoint ID
        .mockResolvedValueOnce(""), // Alias
      progress: vi.fn(() => ({
        update: vi.fn(),
        stop: vi.fn(),
      })),
      select: vi.fn().mockResolvedValueOnce("unknown").mockResolvedValueOnce("baseUrl"),
      confirm: vi.fn(),
      note: vi.fn(),
    };

    vi.stubGlobal(
      "fetch",
      vi
        .fn()
        .mockResolvedValueOnce({ ok: false, status: 404, json: async () => ({}) })
        .mockResolvedValueOnce({ ok: false, status: 404, json: async () => ({}) })
        .mockResolvedValueOnce({ ok: true, json: async () => ({}) }),
    );

    await promptCustomApiConfig({
      prompter: prompter as unknown as Parameters<typeof promptCustomApiConfig>[0]["prompter"],
      runtime: { ...defaultRuntime, log: vi.fn() },
      config: {},
    });

    expect(prompter.note).toHaveBeenCalledWith(
      expect.stringContaining("did not respond"),
      "Endpoint detection",
    );
  });

  it("renames provider id when baseUrl differs", async () => {
    const prompter = {
      text: vi
        .fn()
        .mockResolvedValueOnce("http://localhost:11434/v1") // Base URL
        .mockResolvedValueOnce("") // API Key
        .mockResolvedValueOnce("llama3") // Model ID
        .mockResolvedValueOnce("custom") // Endpoint ID
        .mockResolvedValueOnce(""), // Alias
      progress: vi.fn(() => ({
        update: vi.fn(),
        stop: vi.fn(),
      })),
      select: vi.fn().mockResolvedValueOnce("openai"),
      confirm: vi.fn(),
      note: vi.fn(),
    };

    vi.stubGlobal(
      "fetch",
      vi.fn().mockResolvedValueOnce({
        ok: true,
        json: async () => ({}),
      }),
    );

    const result = await promptCustomApiConfig({
      prompter: prompter as unknown as Parameters<typeof promptCustomApiConfig>[0]["prompter"],
      runtime: { ...defaultRuntime, log: vi.fn() },
      config: {
        models: {
          providers: {
            custom: {
              baseUrl: "http://old.example.com/v1",
              api: "openai-completions",
              models: [
                {
                  id: "old-model",
                  name: "Old",
                  contextWindow: 1,
                  maxTokens: 1,
                  input: ["text"],
                  cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
                  reasoning: false,
                },
              ],
            },
          },
        },
      },
    });

    expect(result.providerId).toBe("custom-2");
    expect(result.config.models?.providers?.custom).toBeDefined();
    expect(result.config.models?.providers?.["custom-2"]).toBeDefined();
  });

  it("aborts verification after timeout", async () => {
    vi.useFakeTimers();
    const prompter = {
      text: vi
        .fn()
        .mockResolvedValueOnce("http://localhost:11434/v1") // Base URL
        .mockResolvedValueOnce("") // API Key
        .mockResolvedValueOnce("slow-model") // Model ID
        .mockResolvedValueOnce("fast-model") // Model ID retry
        .mockResolvedValueOnce("custom") // Endpoint ID
        .mockResolvedValueOnce(""), // Alias
      progress: vi.fn(() => ({
        update: vi.fn(),
        stop: vi.fn(),
      })),
      select: vi.fn().mockResolvedValueOnce("openai").mockResolvedValueOnce("model"),
      confirm: vi.fn(),
      note: vi.fn(),
    };

    const fetchMock = vi
      .fn()
      .mockImplementationOnce((_url: string, init?: { signal?: AbortSignal }) => {
        return new Promise((_resolve, reject) => {
          init?.signal?.addEventListener("abort", () => reject(new Error("AbortError")));
        });
      })
      .mockResolvedValueOnce({ ok: true, json: async () => ({}) });
    vi.stubGlobal("fetch", fetchMock);

    const promise = promptCustomApiConfig({
      prompter: prompter as unknown as Parameters<typeof promptCustomApiConfig>[0]["prompter"],
      runtime: { ...defaultRuntime, log: vi.fn() },
      config: {},
    });

    await vi.advanceTimersByTimeAsync(10000);
    await promise;

    expect(prompter.text).toHaveBeenCalledTimes(6);
  });
});
src/commands/onboard-custom.ts (new file, 476 lines)
@@ -0,0 +1,476 @@
import type { OpenClawConfig } from "../config/config.js";
import type { ModelProviderConfig } from "../config/types.models.js";
import type { RuntimeEnv } from "../runtime.js";
import type { WizardPrompter } from "../wizard/prompts.js";
import { DEFAULT_PROVIDER } from "../agents/defaults.js";
import { buildModelAliasIndex, modelKey } from "../agents/model-selection.js";
import { fetchWithTimeout } from "../utils/fetch-timeout.js";
import { applyPrimaryModel } from "./model-picker.js";
import { normalizeAlias } from "./models/shared.js";

const DEFAULT_OLLAMA_BASE_URL = "http://127.0.0.1:11434/v1";
const DEFAULT_CONTEXT_WINDOW = 4096;
const DEFAULT_MAX_TOKENS = 4096;
const VERIFY_TIMEOUT_MS = 10000;

type CustomApiCompatibility = "openai" | "anthropic";
type CustomApiCompatibilityChoice = CustomApiCompatibility | "unknown";
type CustomApiResult = {
  config: OpenClawConfig;
  providerId?: string;
  modelId?: string;
};

const COMPATIBILITY_OPTIONS: Array<{
  value: CustomApiCompatibilityChoice;
  label: string;
  hint: string;
  api?: "openai-completions" | "anthropic-messages";
}> = [
  {
    value: "openai",
    label: "OpenAI-compatible",
    hint: "Uses /chat/completions",
    api: "openai-completions",
  },
  {
    value: "anthropic",
    label: "Anthropic-compatible",
    hint: "Uses /messages",
    api: "anthropic-messages",
  },
  {
    value: "unknown",
    label: "Unknown (detect automatically)",
    hint: "Probes OpenAI then Anthropic endpoints",
  },
];

function normalizeEndpointId(raw: string): string {
  const trimmed = raw.trim().toLowerCase();
  if (!trimmed) {
    return "";
  }
  return trimmed.replace(/[^a-z0-9-]+/g, "-").replace(/^-+|-+$/g, "");
}

function buildEndpointIdFromUrl(baseUrl: string): string {
  try {
    const url = new URL(baseUrl);
    const host = url.hostname.replace(/[^a-z0-9]+/gi, "-").toLowerCase();
    const port = url.port ? `-${url.port}` : "";
    const candidate = `custom-${host}${port}`;
    return normalizeEndpointId(candidate) || "custom";
  } catch {
    return "custom";
  }
}

function resolveUniqueEndpointId(params: {
  requestedId: string;
  baseUrl: string;
  providers: Record<string, ModelProviderConfig | undefined>;
}) {
  const normalized = normalizeEndpointId(params.requestedId) || "custom";
  const existing = params.providers[normalized];
  if (!existing?.baseUrl || existing.baseUrl === params.baseUrl) {
    return { providerId: normalized, renamed: false };
  }
  let suffix = 2;
  let candidate = `${normalized}-${suffix}`;
  while (params.providers[candidate]) {
    suffix += 1;
    candidate = `${normalized}-${suffix}`;
  }
  return { providerId: candidate, renamed: true };
}
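Review note: the two ID helpers above map arbitrary user input to safe, unique provider IDs. As a quick sanity check, here is how they behave on typical inputs (the helpers are copied verbatim from the diff; the demo lines are illustrative):

```typescript
// Copied from the diff above, for a standalone demonstration.
function normalizeEndpointId(raw: string): string {
  const trimmed = raw.trim().toLowerCase();
  if (!trimmed) {
    return "";
  }
  return trimmed.replace(/[^a-z0-9-]+/g, "-").replace(/^-+|-+$/g, "");
}

function buildEndpointIdFromUrl(baseUrl: string): string {
  try {
    const url = new URL(baseUrl);
    const host = url.hostname.replace(/[^a-z0-9]+/gi, "-").toLowerCase();
    const port = url.port ? `-${url.port}` : "";
    return normalizeEndpointId(`custom-${host}${port}`) || "custom";
  } catch {
    return "custom";
  }
}

console.log(normalizeEndpointId("My Local Endpoint!")); // "my-local-endpoint"
console.log(buildEndpointIdFromUrl("http://localhost:11434/v1")); // "custom-localhost-11434"
console.log(buildEndpointIdFromUrl("not a url")); // "custom" (fallback)
```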
|
|
||||||
|
function resolveAliasError(params: {
|
||||||
|
raw: string;
|
||||||
|
cfg: OpenClawConfig;
|
||||||
|
modelRef: string;
|
||||||
|
}): string | undefined {
|
||||||
|
const trimmed = params.raw.trim();
|
||||||
|
if (!trimmed) {
|
||||||
|
return undefined;
|
||||||
|
}
|
||||||
|
let normalized: string;
|
||||||
|
try {
|
||||||
|
normalized = normalizeAlias(trimmed);
|
||||||
|
} catch (err) {
|
||||||
|
return err instanceof Error ? err.message : "Alias is invalid.";
|
||||||
|
}
|
||||||
|
const aliasIndex = buildModelAliasIndex({
|
||||||
|
cfg: params.cfg,
|
||||||
|
defaultProvider: DEFAULT_PROVIDER,
|
||||||
|
});
|
||||||
|
const aliasKey = normalized.toLowerCase();
|
||||||
|
const existing = aliasIndex.byAlias.get(aliasKey);
|
||||||
|
if (!existing) {
|
||||||
|
return undefined;
|
||||||
|
}
|
||||||
|
const existingKey = modelKey(existing.ref.provider, existing.ref.model);
|
||||||
|
if (existingKey === params.modelRef) {
|
||||||
|
return undefined;
|
||||||
|
}
|
||||||
|
return `Alias ${normalized} already points to ${existingKey}.`;
|
||||||
|
}
|
||||||
|
|
||||||
|
function buildOpenAiHeaders(apiKey: string) {
|
||||||
|
const headers: Record<string, string> = {};
|
||||||
|
if (apiKey) {
|
||||||
|
headers.Authorization = `Bearer ${apiKey}`;
|
||||||
|
}
|
||||||
|
return headers;
|
||||||
|
}
|
||||||
|
|
||||||
|
function buildAnthropicHeaders(apiKey: string) {
|
||||||
|
const headers: Record<string, string> = {
|
||||||
|
"anthropic-version": "2023-06-01",
|
||||||
|
};
|
||||||
|
if (apiKey) {
|
||||||
|
headers["x-api-key"] = apiKey;
|
||||||
|
}
|
||||||
|
return headers;
|
||||||
|
}
|
||||||
|
|
||||||
|
function formatVerificationError(error: unknown): string {
|
||||||
|
if (!error) {
|
||||||
|
return "unknown error";
|
||||||
|
}
|
||||||
|
if (error instanceof Error) {
|
||||||
|
return error.message;
|
||||||
|
}
|
||||||
|
if (typeof error === "string") {
|
||||||
|
return error;
|
||||||
|
}
|
||||||
|
try {
|
||||||
|
return JSON.stringify(error);
|
||||||
|
} catch {
|
||||||
|
return "unknown error";
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
type VerificationResult = {
|
||||||
|
ok: boolean;
|
||||||
|
status?: number;
|
||||||
|
error?: unknown;
|
||||||
|
};
|
||||||
|
|
||||||
|
async function requestOpenAiVerification(params: {
|
||||||
|
baseUrl: string;
|
||||||
|
apiKey: string;
|
||||||
|
modelId: string;
|
||||||
|
}): Promise<VerificationResult> {
|
||||||
|
const endpoint = new URL(
|
||||||
|
"chat/completions",
|
||||||
|
params.baseUrl.endsWith("/") ? params.baseUrl : `${params.baseUrl}/`,
|
||||||
|
).href;
|
||||||
|
try {
|
||||||
|
const res = await fetchWithTimeout(
|
||||||
|
endpoint,
|
||||||
|
{
|
||||||
|
method: "POST",
|
||||||
|
headers: {
|
||||||
|
"Content-Type": "application/json",
|
||||||
|
...buildOpenAiHeaders(params.apiKey),
|
||||||
|
},
|
||||||
|
body: JSON.stringify({
|
||||||
|
model: params.modelId,
|
||||||
|
messages: [{ role: "user", content: "Hi" }],
|
||||||
|
max_tokens: 5,
|
||||||
|
}),
|
||||||
|
},
|
||||||
|
VERIFY_TIMEOUT_MS,
|
||||||
|
);
|
||||||
|
return { ok: res.ok, status: res.status };
|
||||||
|
} catch (error) {
|
||||||
|
return { ok: false, error };
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
async function requestAnthropicVerification(params: {
|
||||||
|
baseUrl: string;
|
||||||
|
apiKey: string;
|
||||||
|
modelId: string;
|
||||||
|
}): Promise<VerificationResult> {
|
||||||
|
const endpoint = new URL(
|
||||||
|
"messages",
|
||||||
|
params.baseUrl.endsWith("/") ? params.baseUrl : `${params.baseUrl}/`,
|
||||||
|
).href;
|
||||||
|
try {
|
||||||
|
const res = await fetchWithTimeout(
|
||||||
|
endpoint,
|
||||||
|
{
|
||||||
|
method: "POST",
|
||||||
|
headers: {
|
||||||
|
"Content-Type": "application/json",
|
||||||
|
...buildAnthropicHeaders(params.apiKey),
|
||||||
|
},
|
||||||
|
body: JSON.stringify({
|
||||||
|
model: params.modelId,
|
||||||
|
max_tokens: 16,
|
||||||
|
messages: [{ role: "user", content: "Hi" }],
|
||||||
|
}),
|
||||||
|
},
|
||||||
|
VERIFY_TIMEOUT_MS,
|
||||||
|
);
|
||||||
|
return { ok: res.ok, status: res.status };
|
||||||
|
} catch (error) {
|
||||||
|
return { ok: false, error };
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
async function promptBaseUrlAndKey(params: {
  prompter: WizardPrompter;
  initialBaseUrl?: string;
}): Promise<{ baseUrl: string; apiKey: string }> {
  const baseUrlInput = await params.prompter.text({
    message: "API Base URL",
    initialValue: params.initialBaseUrl ?? DEFAULT_OLLAMA_BASE_URL,
    placeholder: "https://api.example.com/v1",
    validate: (val) => {
      try {
        new URL(val);
        return undefined;
      } catch {
        return "Please enter a valid URL (e.g. http://...)";
      }
    },
  });
  const apiKeyInput = await params.prompter.text({
    message: "API Key (leave blank if not required)",
    placeholder: "sk-...",
    initialValue: "",
  });
  return { baseUrl: baseUrlInput.trim(), apiKey: apiKeyInput.trim() };
}
export async function promptCustomApiConfig(params: {
  prompter: WizardPrompter;
  runtime: RuntimeEnv;
  config: OpenClawConfig;
}): Promise<CustomApiResult> {
  const { prompter, runtime, config } = params;

  const baseInput = await promptBaseUrlAndKey({ prompter });
  let baseUrl = baseInput.baseUrl;
  let apiKey = baseInput.apiKey;

  const compatibilityChoice = await prompter.select({
    message: "Endpoint compatibility",
    options: COMPATIBILITY_OPTIONS.map((option) => ({
      value: option.value,
      label: option.label,
      hint: option.hint,
    })),
  });

  let modelId = (
    await prompter.text({
      message: "Model ID",
      placeholder: "e.g. llama3, claude-3-7-sonnet",
      validate: (val) => (val.trim() ? undefined : "Model ID is required"),
    })
  ).trim();

  let compatibility: CustomApiCompatibility | null =
    compatibilityChoice === "unknown" ? null : compatibilityChoice;
  let providerApi =
    COMPATIBILITY_OPTIONS.find((entry) => entry.value === compatibility)?.api ??
    "openai-completions";

  while (true) {
    let verifiedFromProbe = false;
    if (!compatibility) {
      const probeSpinner = prompter.progress("Detecting endpoint type...");
      const openaiProbe = await requestOpenAiVerification({ baseUrl, apiKey, modelId });
      if (openaiProbe.ok) {
        probeSpinner.stop("Detected OpenAI-compatible endpoint.");
        compatibility = "openai";
        providerApi = "openai-completions";
        verifiedFromProbe = true;
      } else {
        const anthropicProbe = await requestAnthropicVerification({ baseUrl, apiKey, modelId });
        if (anthropicProbe.ok) {
          probeSpinner.stop("Detected Anthropic-compatible endpoint.");
          compatibility = "anthropic";
          providerApi = "anthropic-messages";
          verifiedFromProbe = true;
        } else {
          probeSpinner.stop("Could not detect endpoint type.");
          await prompter.note(
            "This endpoint did not respond to OpenAI or Anthropic style requests.",
            "Endpoint detection",
          );
          const retryChoice = await prompter.select({
            message: "What would you like to change?",
            options: [
              { value: "baseUrl", label: "Change base URL" },
              { value: "model", label: "Change model" },
              { value: "both", label: "Change base URL and model" },
            ],
          });
          if (retryChoice === "baseUrl" || retryChoice === "both") {
            const retryInput = await promptBaseUrlAndKey({
              prompter,
              initialBaseUrl: baseUrl,
            });
            baseUrl = retryInput.baseUrl;
            apiKey = retryInput.apiKey;
          }
          if (retryChoice === "model" || retryChoice === "both") {
            modelId = (
              await prompter.text({
                message: "Model ID",
                placeholder: "e.g. llama3, claude-3-7-sonnet",
                validate: (val) => (val.trim() ? undefined : "Model ID is required"),
              })
            ).trim();
          }
          continue;
        }
      }
    }

    if (verifiedFromProbe) {
      break;
    }

    const verifySpinner = prompter.progress("Verifying...");
    const result =
      compatibility === "anthropic"
        ? await requestAnthropicVerification({ baseUrl, apiKey, modelId })
        : await requestOpenAiVerification({ baseUrl, apiKey, modelId });
    if (result.ok) {
      verifySpinner.stop("Verification successful.");
      break;
    }
    if (result.status !== undefined) {
      verifySpinner.stop(`Verification failed: status ${result.status}`);
    } else {
      verifySpinner.stop(`Verification failed: ${formatVerificationError(result.error)}`);
    }
    const retryChoice = await prompter.select({
      message: "What would you like to change?",
      options: [
        { value: "baseUrl", label: "Change base URL" },
        { value: "model", label: "Change model" },
        { value: "both", label: "Change base URL and model" },
      ],
    });
    if (retryChoice === "baseUrl" || retryChoice === "both") {
      const retryInput = await promptBaseUrlAndKey({
        prompter,
        initialBaseUrl: baseUrl,
      });
      baseUrl = retryInput.baseUrl;
      apiKey = retryInput.apiKey;
    }
    if (retryChoice === "model" || retryChoice === "both") {
      modelId = (
        await prompter.text({
          message: "Model ID",
          placeholder: "e.g. llama3, claude-3-7-sonnet",
          validate: (val) => (val.trim() ? undefined : "Model ID is required"),
        })
      ).trim();
    }
    if (compatibilityChoice === "unknown") {
      compatibility = null;
    }
  }

  const providers = config.models?.providers ?? {};
  const suggestedId = buildEndpointIdFromUrl(baseUrl);
  const providerIdInput = await prompter.text({
    message: "Endpoint ID",
    initialValue: suggestedId,
    placeholder: "custom",
    validate: (value) => {
      const normalized = normalizeEndpointId(value);
      if (!normalized) {
        return "Endpoint ID is required.";
      }
      return undefined;
    },
  });
  const providerIdResult = resolveUniqueEndpointId({
    requestedId: providerIdInput,
    baseUrl,
    providers,
  });
  if (providerIdResult.renamed) {
    await prompter.note(
      `Endpoint ID "${providerIdInput}" already exists for a different base URL. Using "${providerIdResult.providerId}".`,
      "Endpoint ID",
    );
  }
  const providerId = providerIdResult.providerId;

  const modelRef = modelKey(providerId, modelId);
  const aliasInput = await prompter.text({
    message: "Model alias (optional)",
    placeholder: "e.g. local, ollama",
    initialValue: "",
    validate: (value) => resolveAliasError({ raw: value, cfg: config, modelRef }),
  });
  const alias = aliasInput.trim();

  const existingProvider = providers[providerId];
  const existingModels = Array.isArray(existingProvider?.models) ? existingProvider.models : [];
  const hasModel = existingModels.some((model) => model.id === modelId);
  const nextModel = {
    id: modelId,
    name: `${modelId} (Custom API)`,
    contextWindow: DEFAULT_CONTEXT_WINDOW,
    maxTokens: DEFAULT_MAX_TOKENS,
    input: ["text"] as ["text"],
    cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
    reasoning: false,
  };
  const mergedModels = hasModel ? existingModels : [...existingModels, nextModel];
  const { apiKey: existingApiKey, ...existingProviderRest } = existingProvider ?? {};
  const normalizedApiKey = apiKey.trim() || (existingApiKey ? existingApiKey.trim() : undefined);

  let newConfig: OpenClawConfig = {
    ...config,
    models: {
      ...config.models,
      mode: config.models?.mode ?? "merge",
      providers: {
        ...providers,
        [providerId]: {
          ...existingProviderRest,
          baseUrl,
          api: providerApi,
          ...(normalizedApiKey ? { apiKey: normalizedApiKey } : {}),
          models: mergedModels.length > 0 ? mergedModels : [nextModel],
        },
      },
    },
  };

  newConfig = applyPrimaryModel(newConfig, modelRef);
  if (alias) {
    newConfig = {
      ...newConfig,
      agents: {
        ...newConfig.agents,
        defaults: {
          ...newConfig.agents?.defaults,
          models: {
            ...newConfig.agents?.defaults?.models,
            [modelRef]: {
              ...newConfig.agents?.defaults?.models?.[modelRef],
              alias,
            },
          },
        },
      },
    };
  }

  runtime.log(`Configured custom provider: ${providerId}/${modelId}`);
  return { config: newConfig, providerId, modelId };
}
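The endpoint detection inside the loop above reduces to sequential protocol probing: issue an OpenAI-style request, then an Anthropic-style one, and accept the first that succeeds, otherwise fall back to re-prompting. A minimal, hypothetical sketch of that pattern (names and types are illustrative, not from the codebase):

```typescript
// Hypothetical reduction of the probe-then-fallback detection above:
// run protocol-specific probes in order and return the first that succeeds.
type Probe = { name: string; run: () => Promise<{ ok: boolean }> };

async function detectCompatibility(probes: Probe[]): Promise<string | null> {
  for (const probe of probes) {
    const result = await probe.run();
    if (result.ok) {
      return probe.name; // first protocol the endpoint answers wins
    }
  }
  return null; // nothing matched; the wizard re-prompts the user
}

// Example: an endpoint that rejects OpenAI-style but accepts Anthropic-style.
detectCompatibility([
  { name: "openai", run: async () => ({ ok: false }) },
  { name: "anthropic", run: async () => ({ ok: true }) },
]).then((detected) => console.log(detected)); // anthropic
```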
@@ -38,7 +38,27 @@ export type AuthChoice =
   | "qwen-portal"
   | "xai-api-key"
   | "qianfan-api-key"
+  | "custom-api-key"
   | "skip";
+export type AuthChoiceGroupId =
+  | "openai"
+  | "anthropic"
+  | "google"
+  | "copilot"
+  | "openrouter"
+  | "ai-gateway"
+  | "cloudflare-ai-gateway"
+  | "moonshot"
+  | "zai"
+  | "xiaomi"
+  | "opencode-zen"
+  | "minimax"
+  | "synthetic"
+  | "venice"
+  | "qwen"
+  | "qianfan"
+  | "xai"
+  | "custom";
 export type GatewayAuthChoice = "token" | "password";
 export type ResetScope = "config" | "config+creds+sessions" | "full";
 export type GatewayBind = "loopback" | "lan" | "auto" | "custom" | "tailnet";
@@ -18,6 +18,7 @@ import {
 } from "../commands/auth-choice.js";
 import { applyPrimaryModel, promptDefaultModel } from "../commands/model-picker.js";
 import { setupChannels } from "../commands/onboard-channels.js";
+import { promptCustomApiConfig } from "../commands/onboard-custom.js";
 import {
   applyWizardMetadata,
   DEFAULT_WORKSPACE,
@@ -378,26 +379,38 @@ export async function runOnboardingWizard(
     includeSkip: true,
   }));
 
-  const authResult = await applyAuthChoice({
-    authChoice,
-    config: nextConfig,
-    prompter,
-    runtime,
-    setDefaultModel: true,
-    opts: {
-      tokenProvider: opts.tokenProvider,
-      token: opts.authChoice === "apiKey" && opts.token ? opts.token : undefined,
-    },
-  });
-  nextConfig = authResult.config;
+  let customPreferredProvider: string | undefined;
+  if (authChoice === "custom-api-key") {
+    const customResult = await promptCustomApiConfig({
+      prompter,
+      runtime,
+      config: nextConfig,
+    });
+    nextConfig = customResult.config;
+    customPreferredProvider = customResult.providerId;
+  } else {
+    const authResult = await applyAuthChoice({
+      authChoice,
+      config: nextConfig,
+      prompter,
+      runtime,
+      setDefaultModel: true,
+      opts: {
+        tokenProvider: opts.tokenProvider,
+        token: opts.authChoice === "apiKey" && opts.token ? opts.token : undefined,
+      },
+    });
+    nextConfig = authResult.config;
+  }
 
-  if (authChoiceFromPrompt) {
+  if (authChoiceFromPrompt && authChoice !== "custom-api-key") {
     const modelSelection = await promptDefaultModel({
       config: nextConfig,
       prompter,
       allowKeep: true,
       ignoreAllowlist: true,
-      preferredProvider: resolvePreferredProviderForAuthChoice(authChoice),
+      preferredProvider:
+        customPreferredProvider ?? resolvePreferredProviderForAuthChoice(authChoice),
     });
     if (modelSelection.model) {
       nextConfig = applyPrimaryModel(nextConfig, modelSelection.model);