Compare commits

..

98 Commits

Author SHA1 Message Date
CaIon
a9e03e6172 🔧 refactor(user_cache): remove unused JSON import 2025-07-10 15:12:57 +08:00
CaIon
cb16bf552e Merge branch 'alpha' into refactor_error
# Conflicts:
#	controller/channel.go
#	middleware/distributor.go
#	model/channel.go
#	model/user.go
#	model/user_cache.go
#	relay/common/relay_info.go
2025-07-10 15:11:55 +08:00
CaIon
98952198bb refactor: Introduce standardized API error
This commit refactors the application's error handling mechanism by introducing a new standardized error type, `types.NewAPIError`. It also renames common JSON utility functions for better clarity.

Previously, internal error handling was tightly coupled to the `dto.OpenAIError` format. This change decouples the internal logic from the external API representation.

Key changes:
- A new `types.NewAPIError` struct is introduced to serve as a canonical internal representation for all API errors.
- All relay adapters (OpenAI, Claude, Gemini, etc.) are updated to return `*types.NewAPIError`.
- Controllers now convert the internal `NewAPIError` to the client-facing `OpenAIError` format at the API boundary, ensuring backward compatibility.
- Channel auto-disable/enable logic is updated to use the new standardized error type.
- JSON utility functions are renamed to align with Go's standard library conventions (e.g., `UnmarshalJson` -> `Unmarshal`, `EncodeJson` -> `Marshal`).
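Below is a minimal sketch of the boundary conversion described above. The field names are assumptions, since the actual `types.NewAPIError` definition is not part of this excerpt:

package types

// Hypothetical shape of the internal error type; field names are assumptions.
type NewAPIError struct {
	StatusCode int    // HTTP status to return
	Code       string // machine-readable error code
	Message    string // human-readable description
}

// OpenAIError mirrors the client-facing dto.OpenAIError wire format (illustrative copy).
type OpenAIError struct {
	Message string `json:"message"`
	Type    string `json:"type"`
	Code    string `json:"code"`
}

// ToOpenAIError converts the internal error at the API boundary,
// keeping the wire format backward compatible for clients.
func (e *NewAPIError) ToOpenAIError() OpenAIError {
	return OpenAIError{Message: e.Message, Type: "api_error", Code: e.Code}
}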
2025-07-10 15:02:40 +08:00
CaIon
338e914a60 🔧 refactor(channel): add validation for CHANNEL_TEST_FREQUENCY in automatic tests 2025-07-10 11:31:07 +08:00
Calcium-Ion
78fb457765 Merge pull request #1346 from QuantumNous/fix-ability
 feat(ability): enhance FixAbility function
2025-07-08 18:38:35 +08:00
CaIon
8759ef012f feat(ability): enhance FixAbility function 2025-07-08 18:33:32 +08:00
Calcium-Ion
f8d67a62a2 Merge pull request #1334 from duyazhe/fix-baidu-bug
Fixed a bug where Baidu requests required an appid to be passed
2025-07-07 14:51:23 +08:00
Xyfacai
efb98854b2 Merge pull request #1341 from QuantumNous/refactor/log-params
refactor: log params and channel params
2025-07-07 14:29:16 +08:00
Xiangyuan-liu
7b29f429ee refactor: log params and channel params
refactor: log params and channel params
2025-07-07 14:26:37 +08:00
CaIon
265c7d93a2 🔧 refactor(adaptor): update HTTP referer to new API domain 2025-07-07 12:36:04 +08:00
duyazhe
ce57ad3570 Update adaptor.go 2025-07-07 09:57:20 +08:00
t0ng7u
0e6b608f91 🎨 feat(ui): dynamic multi-key controls in Channel editor
Summary
1. Load `channel_info` when editing:
   • Detect if the channel is in multi-key mode (`is_multi_key`).
   • Auto-initialize `batch`, `multiToSingle`, and `multiKeyMode` from backend data.

2. Visibility logic
   • Creation page: “Batch create / Multi-key mode” always available.
   • Edit page: show these controls **only when** the channel itself is multi-key.

3. State consistency
   • `multi_key_mode` added to `inputs`; `setValues(inputs)` now preserves the user’s selection.

Result
Single-key channels no longer display irrelevant “key aggregation” options, while multi-key channels open with the correct defaults, providing a cleaner and more accurate editing experience.
2025-07-07 01:53:19 +08:00
t0ng7u
f1856fe4d2 🔧 **fix(model, controller): robust serialization for ChannelInfo & correct Vertex-AI key storage**
Summary
1. **model/channel.go**
   • Replaced the pointer-only `Value()` with a value-receiver implementation
   • GORM can now marshal both `ChannelInfo` and `*ChannelInfo`, eliminating
     `unsupported type model.ChannelInfo` runtime error.

2. **controller/channel.go**
   • Refactored `getVertexArrayKeys()` – every element is now
     - passed through `json.Marshal` when not already a string
     - trimmed & validated before insertion
   • Guarantees each service-account key is persisted as a **pure JSON string**
     instead of the previous `map[...]` dump.

Result
• Channel creation / update succeeds without SQL driver errors.
• Vertex-AI batch & multi-key uploads are stored in canonical JSON, ready for
  downstream SDKs to consume.
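
The receiver change follows Go's standard `driver.Valuer` pattern; a minimal sketch, with illustrative struct fields:

package model

import (
	"database/sql/driver"
	"encoding/json"
)

type ChannelInfo struct {
	IsMultiKey   bool   `json:"is_multi_key"`
	MultiKeyMode string `json:"multi_key_mode"`
}

// Value uses a value receiver, so the method is in the method set of
// both ChannelInfo and *ChannelInfo; GORM can therefore serialize
// either form without the "unsupported type" runtime error.
func (c ChannelInfo) Value() (driver.Value, error) {
	return json.Marshal(c)
}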
2025-07-07 01:31:41 +08:00
t0ng7u
870cdd5a56 feat(channel): Robust Vertex AI batch-key upload & stable multi-key settings
Summary
-------
1. Vertex AI JSON key upload
   • Accept multiple `.json` files (drag & drop / click).
   • Parse each `fileInstance`; build valid key array.
   • Malformed files are skipped and collected into **one** toast message.
   • Upload list now displays only valid files; form state (`vertex_files`) stays in sync.
   • On *Submit*, keys are re-parsed to guard against async timing loss.

2. Multi-key mode stability
   • Added `multi_key_mode: "random"` to initial form values.
   • `Form.Select` becomes fully controlled (`value`/`onChange`).
   • Toggling the “multi-key mode” checkbox writes default / removes field, so
     `setValues(inputs)` no longer clears the user’s choice.

3. UX / compatibility tweaks
   • Preserve uploaded files when editing any other field.
   • Use `fileInstance` only – compatible with legacy Semi Upload API.
   • Removed redundant `limit` prop on `Form.Upload`.
   • Aggregated error handling avoids toast spam.

Result
------
Channel creation/update now supports:
• Reliable batch import of Vertex AI service-account keys.
• Consistent retention of multi-key strategy (`random` / `polling`).
• Cleaner, user-friendly error feedback.
2025-07-07 01:12:01 +08:00
duyazhe
9282f1d893 Fixed a bug where Baidu requests required an appid to be passed 2025-07-06 23:09:49 +08:00
CaIon
9546a47f2b feat(tokens): add cherryConfig support for URL generation and base64 encoding 2025-07-06 20:56:09 +08:00
CaIon
f0f277dc2a 🔧 refactor(auth, channel, context): improve context setup and validation for multi-key channels 2025-07-06 12:37:56 +08:00
CaIon
b695e67154 🔧 refactor(channel): replace common.ChannelTypeVertexAi with constant.ChannelTypeVertexAi 2025-07-06 10:35:29 +08:00
CaIon
fa2cd85007 Merge branch 'alpha' into mutil_key_channel
# Conflicts:
#	controller/channel.go
#	docker-compose.yml
#	web/src/components/table/ChannelsTable.js
#	web/src/pages/Channel/EditChannel.js
2025-07-06 10:33:48 +08:00
CaIon
8073cbd96a 🔧 refactor(model): change user group retrieval to non-strict mode 2025-07-06 10:23:38 +08:00
CaIon
5eba2f1d61 🔧 refactor(model): update context key retrieval to use token group instead of user group 2025-07-05 16:40:49 +08:00
Calcium-Ion
5ec421d8e6 Merge pull request #1321 from iszcz/main
Support Midjourney video tasks and image editing
2025-07-05 15:28:33 +08:00
CaIon
1e25bf700d Merge remote-tracking branch 'origin/alpha' into alpha 2025-07-05 14:14:48 +08:00
CaIon
30fb349d91 feat(endpoint types): add support for image generation models in endpoint type handling 2025-07-05 14:14:40 +08:00
t0ng7u
d40fb68500 📊 feat(detail): add model consumption trend & call ranking charts
Introduce two new visualizations to the “Model Data Analysis” panel:

1. Model Consumption Trend (line chart)
   • Added `spec_model_line` state and legend support.
   • Calculates per-model counts over time and updates via `updateChartData`.
2. Model Call Ranking (bar chart)
   • Added `spec_rank_bar` state with `seriesField` and legend enabled.
   • Ranks models by total call count.

Additional changes:
• Extended tab navigation with two new `TabPane`s and adjusted chart rendering logic.
• Swapped icons/texts to match new chart purposes.
• Reused existing color mapping to ensure consistent palette.

No breaking changes; UI now offers richer insights into model usage patterns.
2025-07-05 00:37:05 +08:00
t0ng7u
3049ad47e5 🔢 feat(user-edit): replace add-quota input with Semi-UI InputNumber
Summary:
• Imported InputNumber from @douyinfe/semi-ui.
• Swapped plain Input for InputNumber in “Add Quota” modal.
• Added UX tweaks: full-width styling, showClear, step = 500 000.
• Initialized addQuotaLocal to an empty string so the field starts blank.
• Adjusted state handling and kept quota calculation logic unchanged.

This improves numeric input accuracy and overall user experience without breaking existing functionality.
2025-07-05 00:03:12 +08:00
t0ng7u
8945a3a2dd 🖼️ style(RatioSync): remove the useless rounded-full style 2025-07-04 23:49:34 +08:00
t0ng7u
d191eef657 🐛 fix: fix the header height calculation issue in the custom HTML styles on the homepage 2025-07-04 23:42:46 +08:00
CaIon
6ac7878863 🔧 refactor(endpoint types): comment out unused endpoint types in constants 2025-07-04 15:53:46 +08:00
t0ng7u
c0a23ffa62 🎨 refactor(EditTagModal): tidy imports & enhance state-sync on open
Motivation
• Remove unused UI components to keep the bundle lean and silence linter warnings.
• Ensure every time the side-sheet opens it reflects the latest tag data, avoiding stale form values (e.g., model / group mismatches).

Key Changes
1. UI Imports
   – Dropped `Input`, `Select`, `TextArea` from `@douyinfe/semi-ui` (unused in Form-based version).
2. State Reset & Form Sync
   – On `visible` or `tag` change:
     • Refresh model & group options.
     • Reset `inputs` to clean defaults (`originInputs`) carrying the current `tag`.
     • Pre-fill Form through `formApiRef` to keep controlled fields aligned.
3. Minor Cleanup
   – Added inline comment clarifying local state reset purpose.

Result
Opening the “Edit Tag” side-sheet now always displays accurate data without residual selections, and build output is cleaner due to removed dead imports.
2025-07-04 06:14:15 +08:00
t0ng7u
7d691f362d refactor(EditChannel&EditToken): refactor Channel & Token edit pages with Semi Form and UX enhancements
Overview
• Migrated both `EditChannel.js` and `EditToken.js` to fully leverage Semi UI `Form.*` components, removing legacy `Input/Select/TextArea` + manual labels.
• Unified data-loading strategy: when the drawer becomes visible we load (or reset) data via `props.visible + id` effect and `formApi.setValues()`, guaranteeing fields are always populated; form resets on close.
• Fixed blank-form bug when opening the same record twice.

Key improvements
1. Validation
   • `type`, `models` always required.
   • `key` required only while creating (not on edit).
2. Batch key creation
   • Checkbox moved into `extraText`; hidden when editing or when channel type = 41.
3. Layout & UI
   • `Row / Col` (12 + 12) for “Priority” and “Weight”.
   • Placeholders revised; model selector now shows creation hint; removed obsolete banner.
   • Help / extraText used for long hints, template buttons (`model_mapping`, `status_code_mapping`, `param_override`, etc.), and API address notice.
   • Added `showClear`, `min`, rounded card class names for consistency.
4. Reusable helpers
   • `batchAllowed`, `batchExtra` utilities.
   • `getInitValues()` + centralized `inputs`→form synchronization.
5. Token editor aligned to the same pattern (`props.visiable` watcher).

Result
Cleaner code, consistent UX, instant field population on every open, and clearer validation/error feedback across both editors.
2025-07-04 05:36:10 +08:00
t0ng7u
bf577b8937 🔌 feat(api): extend endpoint type support & expose in pricing UI
* backend
  - constant/endpoint_type.go
    • Add EndpointTypeMidjourney, EndpointTypeSuno, EndpointTypeKling, EndpointTypeJimeng.
  - common/endpoint_type.go
    • Map Midjourney / MidjourneyPlus, SunoAPI, Kling, Jimeng channel types to the new endpoint types.

* frontend
  - ModelPricing.js
    • Add “Supported Endpoint Type” column.
    • Implement renderSupportedEndpoints with `stringToColor` for consistent tag colors.

These changes allow `/api/pricing` and model lists to return accurate
`supported_endpoint_types` covering all non-OpenAI providers and display
them clearly in the UI.

No breaking changes.
2025-07-04 03:15:34 +08:00
Calcium-Ion
819290c9b8 Merge pull request #1314 from vickyyd/main
Fix the error that occurred when testing the gemini-2.5-pro model with gemini-balance as the upstream
2025-07-03 15:53:32 +08:00
CaIon
22e8b46159 feat: make TopN field in RerankRequest optional in JSON serialization 2025-07-03 15:45:32 +08:00
CaIon
76b8cc1168 feat: add pull request template and enforce branching strategy in workflow 2025-07-03 13:33:50 +08:00
Calcium-Ion
fce07325b9 Merge pull request #1325 from feitianbubu/pr/fix-ali-embedding-lost-prompt-token
fix: ali embedding lose prompt_tokens
2025-07-03 13:26:51 +08:00
Calcium-Ion
123862d41c Merge pull request #1326 from QuantumNous/refactor_constant
 feat: refactor environment variable initialization
2025-07-03 13:18:41 +08:00
CaIon
7e298f8ad1 feat: refactor environment variable initialization and introduce new constant types for API and context keys 2025-07-03 13:10:25 +08:00
IcedTangerine
34aca14858 Merge pull request #1309 from feitianbubu/pr/alpha/video-action-constant2
feat: video action to constant
2025-07-02 15:50:23 +08:00
skynono
6b1f94348a fix: ali embedding lose prompt_tokens 2025-07-02 15:12:02 +08:00
CaIon
4322037639 🐛 fix: correct validation logic for redemption name input in EditRedemption component 2025-07-02 10:28:57 +08:00
CaIon
ae11f88595 feat: increase Node.js memory limit in macOS release workflow 2025-07-01 13:23:29 +08:00
CaIon
389a4c3e4c Merge branch 'main' into alpha 2025-07-01 13:15:47 +08:00
CaIon
efb691e6c2 Merge remote-tracking branch 'origin/alpha' into alpha 2025-07-01 13:14:40 +08:00
CaIon
53e3b35437 feat: enhance JWT exchange process with proxy support. (close #1087) 2025-07-01 13:14:24 +08:00
CaIon
eb265a55e1 feat: enhance environment configuration and resource initialization 2025-07-01 13:13:30 +08:00
Calcium-Ion
950f7d214f Merge pull request #1322 from feitianbubu/pr/jimeng-key-delimiter
feat: jimeng apiKey format to use `|` delimiter
2025-07-01 10:44:19 +08:00
skynono
6bd2316d9c feat: jimeng apiKey format to use | delimiter 2025-07-01 10:35:29 +08:00
iszcz
660180ea1b Support Midjourney video tasks and image editing 2025-06-30 22:31:12 +08:00
kikii16
efc8457770 Fix the gemini-balance error when testing gemini-2.5-pro 2025-06-29 13:36:19 +08:00
同語
9b8b982d8a 🐛 fix: ratelimit style error
Merge pull request #1301 from tbphp/fix_ratelimit_style
2025-06-29 02:34:06 +08:00
t0ng7u
e6949e611a style: change the border radius of most components from full to lg size 2025-06-29 02:32:09 +08:00
t0ng7u
cffade7210 🤯style: remove useless card headerStyle 2025-06-29 00:11:15 +08:00
CaIon
6b9237f868 🐛 fix: refactor JSON unmarshalling across multiple handlers to use UnmarshalJson and UnmarshalJsonStr for consistency
This update replaces instances of DecodeJson and DecodeJsonStr with UnmarshalJson and UnmarshalJsonStr in various relay handlers, enhancing code consistency and clarity in JSON processing. The changes improve maintainability and align with recent refactoring efforts in the codebase.
2025-06-28 00:02:07 +08:00
CaIon
1f4cf07b63 🐛 fix: refactor response body handling in multiple relay handlers to utilize IOCopyBytesGracefully 2025-06-27 23:35:56 +08:00
skynono
59a1f4c900 feat: video action to constant 2025-06-27 23:19:34 +08:00
CaIon
0a04a76c71 🐛 fix: refactor JSON encoding and decoding in OpenAI handlers for improved consistency 2025-06-27 22:45:36 +08:00
CaIon
9e6bc518cc 🐛 fix: refactor OaiStreamHandler to improve last response handling and streamline response body closure 2025-06-27 22:44:20 +08:00
CaIon
bfb6fbbac9 🐛 fix: update hardcoded completion model ratio for gemini-2.5-flash-lite 2025-06-27 22:36:23 +08:00
CaIon
9c08d8cf20 feat: introduce IOCopyBytesGracefully function for streamlined response body handling
This update adds the IOCopyBytesGracefully function to the common package, which simplifies the process of copying response bodies in the OpenAI handlers. It enhances error handling and ensures proper resource management by encapsulating the logic for setting headers and writing response data. The OpenAI handlers have been refactored to utilize this new function, improving code clarity and maintainability.
2025-06-27 22:36:12 +08:00
CaIon
281054ff4c 🐛 fix: replace direct response body closure with common.CloseResponseBodyGracefully for improved error handling
This update standardizes the closure of HTTP response bodies across multiple stream handlers, enhancing error management and resource cleanup. The new method ensures that any errors during closure are handled gracefully, preventing potential request termination issues.
2025-06-27 21:40:36 +08:00
CaIon
3002659f47 feat: add CloseResponseBodyGracefully function to handle HTTP response body closure 2025-06-27 21:37:13 +08:00
Calcium-Ion
647f8d7958 Merge pull request #1274 from feitianbubu/feat/add-channel-jimeng
feat: support the Jimeng video channel
2025-06-27 21:16:50 +08:00
CaIon
5d289d38ba 🐛 fix: handle response body errors more gracefully in OpenAI handler
Changes:
- Replaced error returns with logging for response body copy failures to prevent early termination of the request.
- Ensured that the response body is closed properly after writing to the client.
- Added comments to clarify the handling of billing and error reporting after the response has been sent.

This update improves error handling and maintains resource management in the OpenAI handler.
2025-06-27 21:13:21 +08:00
skynono
05ea0dd54f feat: add video channel jimeng 2025-06-27 17:08:20 +08:00
CaIon
1dad04ec09 feat: add Function and Container fields to ResponsesToolsCall struct #1305 2025-06-27 16:56:54 +08:00
Xyfacai
2171117c53 Merge pull request #1291 from feitianbubu/pr/add-origin-kling-api
feat: add origin kling api
2025-06-27 16:08:03 +08:00
Xyfacai
d389befc9e Merge pull request #1298 from xiangyuanliu/feat/page-format
feat: improve the pagination component
2025-06-27 15:55:30 +08:00
t0ng7u
3ced5ff144 chore: improve channel creation UX by deferring the "Fetch Model List" action until after creation
Previously, the "Fetch Model List" button was visible in the channel-creation view even though
it only functions once a channel record exists, leading to user confusion.

Changes introduced:
• Render the "Fetch Model List" button only when editing an existing channel (`isEdit === true`).
• Display an informational Banner in creation mode to remind users that the upstream model list
  can be fetched after the channel has been created.
• Refactored JSX to apply the above conditional rendering without altering existing logic.

This update streamlines the creation workflow and sets clearer expectations for users.
2025-06-27 10:08:44 +08:00
t0ng7u
38d3ab5acf 💄refactor: enhance EditUser and AddUser form validation & UX
Changes in `web/src/pages/User/EditUser.js`:
• Added `rules` to
  – `Form.Select group`: now required with error “Please select group”.
  – `Form.InputNumber quota`: now required with error “Please enter quota”.
• Added `step={500000}` to quota `InputNumber` for quicker numeric input.
• Replaced invalid `readonly` with React-correct `readOnly`, and added descriptive placeholders for all binding-info fields (GitHub/OIDC/WeChat/Email/Telegram).
• Removed unused `downloadTextAsFile` import.

These updates tighten form validation, improve data entry ergonomics, and restore clear read-only indicators for third-party bindings.
2025-06-27 09:44:18 +08:00
t0ng7u
ab32e15a86 🐛 fix(redemptions-table): correct initial page index and pagination state
Summary:
The redemption list occasionally displayed an invalid range such as “Items -9 - 0” and failed to highlight page 1 after a refresh. This was caused by the table being initialized with `currentPage = 0`.

Changes:
• update `useEffect` to load data starting from page 1 instead of page 0
• refactor `loadRedemptions` to accept `page` (default 1) and sanitize backend‐returned pages (`<= 0` coerced to 1)
• keep other logic unchanged

Impact:
Pagination text and page selection now show correct values on first load or refresh, eliminating negative ranges and ensuring the first page is properly highlighted.
2025-06-27 07:42:04 +08:00
t0ng7u
25e17b95d5 🐛 fix(redemptions-table): show loading indicator while refetching data
Previously, the table did not enter the loading state after performing actions such as deleting, enabling, or disabling a redemption code. This caused a brief period where the UI appeared unresponsive while awaiting the backend response.

Changes made:
• Added `setLoading(true)` at the beginning of `loadRedemptions` to activate the loading spinner whenever data is (re)fetched.
• Added an explanatory code comment to clarify the intent.

This improves user experience by clearly indicating that the system is processing and prevents confusion during data refresh operations.
2025-06-27 07:29:28 +08:00
t0ng7u
d07224e658 🎁 refactor(ui/redemption): migrate EditRedemption page to Semi Form & enhance UX
SUMMARY
• Re-implemented `web/src/pages/Redemption/EditRedemption.js` with Semi Form components, removing legacy local-state handling.
• Added `formApiRef` for centralized control; external “Submit” button now triggers `formApi.submitForm()`.
• Replaced `Input/AutoComplete/DatePicker` etc. with `<Form.*>` fields, leveraging built-in validation & accessibility.
• Field validations:
  – `name` (create only), `quota`, `count` → required with localized messages.
• Expiration-time flow:
  – Default value `null` (no more 1970-01-01).
  – When loading data, convert 0 → null, timestamp → Date.
  – On submit, Date → unix seconds; empty → 0.
• Responsive grid layout (`Row/Col`) for tidy alignment.
• Added helpful `showClear` & full-width styling for inputs; quota presets retained.
• Cleaned unused imports & handlers; fixed linter issues.

RESULT
The Redemption form now benefits from higher performance, clearer validation, and a cleaner codebase consistent with Semi Design best practices.
2025-06-27 07:25:46 +08:00
t0ng7u
aa15d45a3d refactor(ui/token): migrate EditToken page to Semi Form API and polish UX
SUMMARY
• Re-implemented `EditToken.js` with Semi Form components, eliminating manual state handling and reducing re-renders.
• Added grid-based layout; “Expiration Time” selector now sits inline with quick-set buttons for consistent alignment on desktop & mobile.
• Introduced dedicated “Quota”, “Access”, “Model Limits”, and “Group” cards for clearer field grouping.
• Reworked model-limit interaction: single multi-select list replaces checkbox toggle; backend flag `model_limits_enabled` is now inferred automatically.
• Applied required validation rules to critical fields (`name`, `remain_quota`, `group`, `expired_time`, `tokenCount`) with localized messages.
• Enabled dynamic option loading for models & groups; default auto-group honoured.
• Added unlimited-quota switch, quota presets, and helpful extraText/tooltips.
• Removed obsolete `handleInputChange` & `setUnlimitedQuota` helpers; formApi now manages all data flow.
• Cleaned imports (e.g., dropped unused `IconUserGroup`), fixed linter errors, and updated submit logic to use `formApi.submitForm()`.

RESULT
The token creation/editing experience is faster, more accessible, and easier to maintain, fully aligned with Semi Design best practices.
2025-06-26 22:58:25 +08:00
tbphp
c6c68da0b5 fix: ratelimit style error 2025-06-26 21:32:05 +08:00
t0ng7u
1a0aac81df 🎨 style: remove all prefix icons to simplify the layout of the sidesheet component 2025-06-26 16:36:36 +08:00
t0ng7u
39cb45c11c 🎨 style: unify card header UI, switch to Avatar icons & remove oversized props
Summary
• Replaced gradient header blocks with compact, neutral headers wrapped in `Avatar` across the following pages:
  - Channel / EditChannel.js
  - Channel / EditTagModal.js
  - Redemption / EditRedemption.js
  - Token / EditToken.js
  - User / EditUser.js
  - User / AddUser.js

Details
1. Added `Avatar` import and substituted raw icon elements, assigning semantic colors (`blue`, `green`, `purple`, `orange`, etc.) and consistent 16 px icons for a cleaner look.
2. Removed gradient backgrounds, decorative “blur-ball” shapes, and extra paddings from header containers to achieve a tight, flat design.
3. Stripped all `size="large"` attributes from `Button`, `Input`, `Select`, `DatePicker`, `AutoComplete`, and `Avatar` components, allowing default sizing for better visual density.
4. Eliminated redundant `bodyStyle` background overrides in some `SideSheet` components.
5. No business logic touched; all changes are purely presentational.

Result
The editing and creation dialogs now share a unified, compact style consistent with the latest design language, improving readability and user experience without altering functionality.
2025-06-26 16:05:13 +08:00
t0ng7u
05d9aa53ef 🔒 style: Hide registration link when Self-Use Mode is enabled
• Add conditional rendering (`!status.self_use_mode_enabled`) to LoginForm
• Suppress “Don't have an account? Register” CTA in self-hosted scenarios
• Keeps UI clean and prevents unintended user sign-ups under self-use mode
• No impact on regular multi-user deployments
2025-06-26 04:29:44 +08:00
t0ng7u
86f374df58 🐛 fix(auth): prevent duplicate “session expired” toast on login
The login form used to display the message "未登录或登录已过期,请重新登录" ("not logged in or session expired, please log in again") twice,
because the `useEffect` that inspects the `expired` query parameter was
re-executed on every re-render (e.g. a language change, or React StrictMode's
double-mount in development).

### What's changed
• **LoginForm.js** – `useEffect` that shows the toast now has an empty
  dependency array so it runs only once on initial mount.
• Reviewed **PasswordResetConfirm.js**, **PasswordResetForm.js** and
  **RegisterForm.js** and confirmed they do not contain the same issue;
  no changes were required.

### Impact
Users now see the “session expired” notification exactly once, removing
confusion and improving the overall UX.
2025-06-26 03:51:19 +08:00
t0ng7u
6935260bf0 🧶style(TokensTable): add IconDelete in Delete selected token button 2025-06-25 23:23:59 +08:00
t0ng7u
f0d888729b 🐛 fix(auth): restore proper state & context destructuring in Login- and Register-forms
Why
Clicking the “Continue” button on the login page no longer triggered the submission logic. The issue was introduced when `useState`/`useContext` hooks were destructured incorrectly, breaking the setter reference and omitting required values.

What’s changed
• **LoginForm.js**
  – Re-added setter in `useSearchParams` (`[searchParams, setSearchParams]`).
  – Corrected order of destructuring for `inputs` so `username`/`password` are available after hooks.
  – Switched `useContext` to `[userState, userDispatch]` for consistency.

• **RegisterForm.js**
  – Adopted `[userState, userDispatch]` from `UserContext` to mirror LoginForm and retain full state access.

Outcome
Login button now successfully invokes `handleSubmit`, and both auth components have consistent, fully-featured hook destructuring, preventing runtime errors and ensuring future state usage is straightforward.
2025-06-25 23:13:55 +08:00
t0ng7u
6d7d4292ef 💫 feat(ui): introduce dispersed blur-ball background to all auth views
This commit refreshes the visual design of the authentication pages and aligns them with the Home banner style.

Details
• LoginForm.js / RegisterForm.js / PasswordResetForm.js / PasswordResetConfirm.js
  – Wrap top-level container with `relative overflow-hidden` to provide a positioning context.
  – Inject two decorative blur balls:
    ▸ Indigo ball on the top-right (`blur-ball-indigo`).
    ▸ Teal ball on the middle-left (`blur-ball-teal`).
  – Disabled the default X-axis transform on the indigo ball to keep the ball anchored to the corner.
  – Removed redundant `mt-[64px]` from the outer container and shifted it to the inner wrapper to maintain vertical rhythm without affecting the background placement.

Result
The auth screens now feature subtle, non-intrusive atmospheric gradients in the top-right and mid-left corners, offering a cohesive look & feel across the application without obstructing the main content.
2025-06-25 22:57:04 +08:00
t0ng7u
fcefac9dbe 🐛 fix(auth): prevent initial render flicker & clean up state usage
• LoginForm / RegisterForm now initialise `status` directly from localStorage,
  avoiding a post-mount state update that caused a UI flash between OAuth
  options and email/username forms.

• Move Turnstile configuration into a dedicated effect that depends on
  `status`, ensuring setState is not called during rendering.

• Remove unused `setStatus` setter to resolve ESLint “declared but never read”
  warnings.

• Minor refactors: reorder hooks, de-duplicate navigate/context variables and
  streamline state destructuring for improved readability.
2025-06-25 22:46:11 +08:00
t0ng7u
ad5f731b20 🍭style: add the mt-[64px] class to auth components 2025-06-25 22:21:14 +08:00
Xiangyuan-liu
76da067d40 feat: improve the pagination component 2025-06-25 18:42:19 +08:00
CaIon
0689670698 🔧 fix(xinference): update Document type to 'any' for flexibility
- Changed the type of `Document` in `XinRerankResponseDocument` from `string` to `any` to accommodate various data types.
- Updated the `RerankHandler` to handle `Document` as `any`, ensuring proper assignment based on its actual type.

These modifications enhance the handling of document data, allowing for greater versatility in response structures.
2025-06-25 18:04:34 +08:00
t0ng7u
5a6f32c392 🎨 style(ui): refactor Tabs in ModelPricing to use native Semi UI styling
• Removed the custom `renderArrow` helper and its `Dropdown`-based arrow navigation, simplifying the component logic.
• Switched the `<Tabs>` component to rely on Semi UI’s built-in behaviour (no more `renderArrow` override).
• Kept `type="card"` and `collapsible` props for consistent visual appearance while still using the default style.
• Eliminated the now-unused `Dropdown` import.

This cleanup reduces bespoke UI code, makes future maintenance easier, and keeps the interface consistent with the rest of the application.
2025-06-25 15:40:27 +08:00
t0ng7u
d6276c4692 Merge remote-tracking branch 'origin/alpha' into alpha 2025-06-25 15:26:59 +08:00
t0ng7u
29a44eb7ae feat(homepage): enhance banner visuals & UX
• Added read-only Base URL input that shows `status.server_address` (fallback `window.location.origin`) and copies value on click.
• Embedded `ScrollList` as input `suffix`; auto-cycles common endpoints every 3 s and allows manual selection.
• Introduced `API_ENDPOINTS` array in `web/src/constants/common.constant.js` for centralized endpoint management.
• Implemented custom CSS to hide ScrollList wheel indicators / scrollbars for a cleaner look.
• Created two blurred colour spheres behind the banner (`blur-ball-indigo`, `blur-ball-teal`) with light-/dark-mode opacity tweaks and lower vertical placement.
• Increased letter-spacing for Chinese heading via conditional `tracking-wide` / `md:tracking-wider` classes to improve readability.
• Misc: updated imports, helper functions, and responsive sizes to keep UI consistent across devices.
2025-06-25 15:26:51 +08:00
CaIon
048a625181 🚀 feat(auth): support new model API paths in authentication and routing
- Updated TokenAuth middleware to handle requests for both `/v1beta/models/` and `/v1/models/`.
- Adjusted distributor middleware to recognize the new model path.
- Enhanced relay mode determination to include the new model path.
- Added route for handling POST requests to `/models/*path`.

These changes ensure compatibility with the new model API structure, improving the overall routing and authentication flow.
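A hedged sketch of the kind of route registration this implies; the handler body and grouping are assumptions, since the router file is not part of this excerpt:

package main

import "github.com/gin-gonic/gin"

func main() {
	r := gin.Default()
	// Hypothetical: both Gemini-style path prefixes share one wildcard route.
	for _, prefix := range []string{"/v1/models", "/v1beta/models"} {
		r.POST(prefix+"/*path", func(c *gin.Context) {
			// In the real code, TokenAuth and the distributor middleware
			// run first, and relay-mode determination inspects the path.
			c.JSON(200, gin.H{"path": c.Param("path")})
		})
	}
	_ = r.Run(":3000")
}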
2025-06-25 00:19:38 +08:00
t0ng7u
64782027c4 Merge remote-tracking branch 'origin/alpha' into alpha 2025-06-24 18:10:04 +08:00
t0ng7u
277645db50 🔧 style(ui): Inline tag edit action in ChannelsTable
• Removed the dropdown menu previously used for tag-level operations.
• Added a standalone “Edit” button directly after the “Disable All” button, reducing the number of clicks required to edit a tag group.
• Deleted the now-unused `IconEdit` import and its icon reference.

This streamlines the tag management flow and keeps the UI cleaner and more accessible.
2025-06-24 18:09:16 +08:00
skynono
cd2870aebc feat: add origin kling api 2025-06-23 22:36:23 +08:00
CaIon
4a8b7bfa37 temp 2025-06-16 21:01:57 +08:00
CaIon
7403df7e9c feat(channel): enhance channel handling with multi-key support 2025-06-16 02:30:46 +08:00
Apple\Apple
617c8e8f4f feat(channel-ui): support multi-JSON batch creation for Vertex AI & multi-to-single mode
WHY
• Backend (0089157) now accepts structured request `{ mode, channel }`, including new `multi_to_single`.
• Need front-end to upload multiple service-account JSON files and generate correct `channel.key`.
• Improve UX: avoid red “uploadFail” state and offer drag-and-drop UI.

WHAT
1. EditChannel.js
   • Added Upload drag-area with IconBolt; `uploadTrigger="custom"`.
   • `handleJsonFileUpload` reads file, pushes content to `jsonFiles`, returns `{ shouldUpload:false, status:'success' }`.
   • New states: `batch`, `mergeToSingle`, `jsonFiles`.
   • Dynamic mode resolver: `single` | `batch` | `multi_to_single`.
   • Builds `channel.key` as JSON-object whose keys are the raw credential texts.
   • UI:
     – “Batch create” checkbox (new build only).
     – Nested “Merge to single channel (multi-key mode)” checkbox enabled when batch=true.
     – Real-time file count display.

2. Upload UX
   • Drag-and-drop, accepts `.json,application/json`.
   • Custom texts: “Click or drop files here” / “JSON credentials only”.
   • Eliminated mandatory `action` warning (`action="#"`).

3. Misc
   • Included IconBolt import.
   • Safeguard toggles reset logic to prevent stale state.

RESULT
Front-end now fully aligns with enhanced AddChannel API:
• Supports Vertex AI multi JSON batch creation.
• Supports new `multi_to_single` flow.
• Clean user feedback with successful file status.
2025-06-16 01:47:41 +08:00
Apple\Apple
aa793088ed fix(playground): send selected group in API payload
Previously, the Playground UI allowed users to pick a group, but the request body sent to `/pg/chat/completions` did not include this information, so the backend always fell back to the user’s default group.

Changes introduced
• web/src/helpers/api.js – added `group: inputs.group` to the `payload` built in `buildApiPayload`.

Outcome
• Selected group is now transmitted to the backend, enabling proper channel routing and pricing logic based on group ratios.
• Resolves the issue where group selection appeared ineffective in the Playground.
2025-06-16 00:58:47 +08:00
CaIon
0089157b83 feat(channel): enhance AddChannel functionality with structured request handling 2025-06-16 00:37:22 +08:00
208 changed files with 7815 additions and 5829 deletions

View File

@@ -7,6 +7,8 @@
# Debug-related configuration
# Enable pprof
# ENABLE_PPROF=true
# Enable debug mode
# DEBUG=true
# Database-related configuration
# Database connection string
@@ -41,6 +43,14 @@
# Enable the update task
# UPDATE_TASK=true
# Conversation timeout settings
# Timeout for all requests, in seconds; 0 (default) means unlimited
# RELAY_TIMEOUT=0
# No-response timeout in stream mode, in seconds; try a larger value if empty completions appear
# STREAMING_TIMEOUT=120
# Maximum number of images for Gemini vision
# GEMINI_VISION_MAX_IMAGE_NUM=16
# Session secret
# SESSION_SECRET=random_string
@@ -58,8 +68,6 @@
# GET_MEDIA_TOKEN_NOT_STREAM=true
# Whether the Dify channel outputs workflow and node info to the client
# DIFY_DEBUG=true
# Timeout for a single streaming reply
# STREAMING_TIMEOUT=120
# Node type

View File

@@ -0,0 +1,19 @@
### PR Type
- [ ] Bug fix
- [ ] New feature
- [ ] Documentation update
- [ ] Other
### Does this PR contain breaking changes?
- [ ] Yes
- [ ] No
### PR Description
**Please describe your PR in detail below, including its purpose and implementation details.**
### **Important**
**All PRs must be submitted to the `alpha` branch. Please make sure your PR targets `alpha`.**

View File

@@ -26,6 +26,7 @@ jobs:
- name: Build Frontend
env:
CI: ""
NODE_OPTIONS: "--max-old-space-size=4096"
run: |
cd web
bun install

View File

@@ -0,0 +1,21 @@
name: Check PR Branching Strategy
on:
pull_request:
types: [opened, synchronize, reopened, edited]
jobs:
check-branching-strategy:
runs-on: ubuntu-latest
steps:
- name: Enforce branching strategy
run: |
if [[ "${{ github.base_ref }}" == "main" ]]; then
if [[ "${{ github.head_ref }}" != "alpha" ]]; then
echo "Error: Pull requests to 'main' are only allowed from the 'alpha' branch."
exit 1
fi
elif [[ "${{ github.base_ref }}" != "alpha" ]]; then
echo "Error: Pull requests must be targeted to the 'alpha' or 'main' branch."
exit 1
fi
echo "Branching strategy check passed."

View File

@@ -27,9 +27,6 @@
<a href="https://goreportcard.com/report/github.com/Calcium-Ion/new-api">
<img src="https://goreportcard.com/badge/github.com/Calcium-Ion/new-api" alt="GoReportCard">
</a>
<a href="https://coderabbit.ai">
<img src="https://img.shields.io/coderabbit/prs/github/QuantumNous/new-api?utm_source=oss&utm_medium=github&utm_campaign=QuantumNous%2Fnew-api&labelColor=171717&color=FF570A&link=https%3A%2F%2Fcoderabbit.ai&label=CodeRabbit+Reviews" alt="CodeRabbit Pull Request Reviews">
</a>
</p>
</div>

common/api_type.go (new file, 71 lines)
View File

@@ -0,0 +1,71 @@
package common
import "one-api/constant"
func ChannelType2APIType(channelType int) (int, bool) {
apiType := -1
switch channelType {
case constant.ChannelTypeOpenAI:
apiType = constant.APITypeOpenAI
case constant.ChannelTypeAnthropic:
apiType = constant.APITypeAnthropic
case constant.ChannelTypeBaidu:
apiType = constant.APITypeBaidu
case constant.ChannelTypePaLM:
apiType = constant.APITypePaLM
case constant.ChannelTypeZhipu:
apiType = constant.APITypeZhipu
case constant.ChannelTypeAli:
apiType = constant.APITypeAli
case constant.ChannelTypeXunfei:
apiType = constant.APITypeXunfei
case constant.ChannelTypeAIProxyLibrary:
apiType = constant.APITypeAIProxyLibrary
case constant.ChannelTypeTencent:
apiType = constant.APITypeTencent
case constant.ChannelTypeGemini:
apiType = constant.APITypeGemini
case constant.ChannelTypeZhipu_v4:
apiType = constant.APITypeZhipuV4
case constant.ChannelTypeOllama:
apiType = constant.APITypeOllama
case constant.ChannelTypePerplexity:
apiType = constant.APITypePerplexity
case constant.ChannelTypeAws:
apiType = constant.APITypeAws
case constant.ChannelTypeCohere:
apiType = constant.APITypeCohere
case constant.ChannelTypeDify:
apiType = constant.APITypeDify
case constant.ChannelTypeJina:
apiType = constant.APITypeJina
case constant.ChannelCloudflare:
apiType = constant.APITypeCloudflare
case constant.ChannelTypeSiliconFlow:
apiType = constant.APITypeSiliconFlow
case constant.ChannelTypeVertexAi:
apiType = constant.APITypeVertexAi
case constant.ChannelTypeMistral:
apiType = constant.APITypeMistral
case constant.ChannelTypeDeepSeek:
apiType = constant.APITypeDeepSeek
case constant.ChannelTypeMokaAI:
apiType = constant.APITypeMokaAI
case constant.ChannelTypeVolcEngine:
apiType = constant.APITypeVolcEngine
case constant.ChannelTypeBaiduV2:
apiType = constant.APITypeBaiduV2
case constant.ChannelTypeOpenRouter:
apiType = constant.APITypeOpenRouter
case constant.ChannelTypeXinference:
apiType = constant.APITypeXinference
case constant.ChannelTypeXai:
apiType = constant.APITypeXai
case constant.ChannelTypeCoze:
apiType = constant.APITypeCoze
}
if apiType == -1 {
return constant.APITypeOpenAI, false
}
return apiType, true
}
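
Callers receive an explicit `ok` flag instead of a sentinel; unmapped channel types fall back to the OpenAI API type. A quick check:

package main

import (
	"fmt"
	"one-api/common"
	"one-api/constant"
)

func main() {
	apiType, ok := common.ChannelType2APIType(constant.ChannelTypeAnthropic)
	fmt.Println(apiType == constant.APITypeAnthropic, ok) // true true
	// Midjourney has no native API type, so OpenAI is returned with ok=false.
	apiType, ok = common.ChannelType2APIType(constant.ChannelTypeMidjourney)
	fmt.Println(apiType == constant.APITypeOpenAI, ok) // true false
}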

View File

@@ -193,109 +193,3 @@ const (
ChannelStatusManuallyDisabled = 2 // also don't use 0
ChannelStatusAutoDisabled = 3
)
const (
ChannelTypeUnknown = 0
ChannelTypeOpenAI = 1
ChannelTypeMidjourney = 2
ChannelTypeAzure = 3
ChannelTypeOllama = 4
ChannelTypeMidjourneyPlus = 5
ChannelTypeOpenAIMax = 6
ChannelTypeOhMyGPT = 7
ChannelTypeCustom = 8
ChannelTypeAILS = 9
ChannelTypeAIProxy = 10
ChannelTypePaLM = 11
ChannelTypeAPI2GPT = 12
ChannelTypeAIGC2D = 13
ChannelTypeAnthropic = 14
ChannelTypeBaidu = 15
ChannelTypeZhipu = 16
ChannelTypeAli = 17
ChannelTypeXunfei = 18
ChannelType360 = 19
ChannelTypeOpenRouter = 20
ChannelTypeAIProxyLibrary = 21
ChannelTypeFastGPT = 22
ChannelTypeTencent = 23
ChannelTypeGemini = 24
ChannelTypeMoonshot = 25
ChannelTypeZhipu_v4 = 26
ChannelTypePerplexity = 27
ChannelTypeLingYiWanWu = 31
ChannelTypeAws = 33
ChannelTypeCohere = 34
ChannelTypeMiniMax = 35
ChannelTypeSunoAPI = 36
ChannelTypeDify = 37
ChannelTypeJina = 38
ChannelCloudflare = 39
ChannelTypeSiliconFlow = 40
ChannelTypeVertexAi = 41
ChannelTypeMistral = 42
ChannelTypeDeepSeek = 43
ChannelTypeMokaAI = 44
ChannelTypeVolcEngine = 45
ChannelTypeBaiduV2 = 46
ChannelTypeXinference = 47
ChannelTypeXai = 48
ChannelTypeCoze = 49
ChannelTypeKling = 50
ChannelTypeDummy // this one is only for count, do not add any channel after this
)
var ChannelBaseURLs = []string{
"", // 0
"https://api.openai.com", // 1
"https://oa.api2d.net", // 2
"", // 3
"http://localhost:11434", // 4
"https://api.openai-sb.com", // 5
"https://api.openaimax.com", // 6
"https://api.ohmygpt.com", // 7
"", // 8
"https://api.caipacity.com", // 9
"https://api.aiproxy.io", // 10
"", // 11
"https://api.api2gpt.com", // 12
"https://api.aigc2d.com", // 13
"https://api.anthropic.com", // 14
"https://aip.baidubce.com", // 15
"https://open.bigmodel.cn", // 16
"https://dashscope.aliyuncs.com", // 17
"", // 18
"https://api.360.cn", // 19
"https://openrouter.ai/api", // 20
"https://api.aiproxy.io", // 21
"https://fastgpt.run/api/openapi", // 22
"https://hunyuan.tencentcloudapi.com", //23
"https://generativelanguage.googleapis.com", //24
"https://api.moonshot.cn", //25
"https://open.bigmodel.cn", //26
"https://api.perplexity.ai", //27
"", //28
"", //29
"", //30
"https://api.lingyiwanwu.com", //31
"", //32
"", //33
"https://api.cohere.ai", //34
"https://api.minimax.chat", //35
"", //36
"https://api.dify.ai", //37
"https://api.jina.ai", //38
"https://api.cloudflare.com", //39
"https://api.siliconflow.cn", //40
"", //41
"https://api.mistral.ai", //42
"https://api.deepseek.com", //43
"https://api.moka.ai", //44
"https://ark.cn-beijing.volces.com", //45
"https://qianfan.baidubce.com", //46
"", //47
"https://api.x.ai", //48
"https://api.coze.cn", //49
"https://api.klingai.com", //50
}

common/endpoint_type.go (new file, 41 lines)
View File

@@ -0,0 +1,41 @@
package common
import "one-api/constant"
// GetEndpointTypesByChannelType returns the preferred endpoint types for a channel (every channel supports the OpenAI endpoint)
func GetEndpointTypesByChannelType(channelType int, modelName string) []constant.EndpointType {
var endpointTypes []constant.EndpointType
switch channelType {
case constant.ChannelTypeJina:
endpointTypes = []constant.EndpointType{constant.EndpointTypeJinaRerank}
//case constant.ChannelTypeMidjourney, constant.ChannelTypeMidjourneyPlus:
// endpointTypes = []constant.EndpointType{constant.EndpointTypeMidjourney}
//case constant.ChannelTypeSunoAPI:
// endpointTypes = []constant.EndpointType{constant.EndpointTypeSuno}
//case constant.ChannelTypeKling:
// endpointTypes = []constant.EndpointType{constant.EndpointTypeKling}
//case constant.ChannelTypeJimeng:
// endpointTypes = []constant.EndpointType{constant.EndpointTypeJimeng}
case constant.ChannelTypeAws:
fallthrough
case constant.ChannelTypeAnthropic:
endpointTypes = []constant.EndpointType{constant.EndpointTypeAnthropic, constant.EndpointTypeOpenAI}
case constant.ChannelTypeVertexAi:
fallthrough
case constant.ChannelTypeGemini:
endpointTypes = []constant.EndpointType{constant.EndpointTypeGemini, constant.EndpointTypeOpenAI}
case constant.ChannelTypeOpenRouter: // OpenRouter only supports the OpenAI endpoint
endpointTypes = []constant.EndpointType{constant.EndpointTypeOpenAI}
default:
if IsOpenAIResponseOnlyModel(modelName) {
endpointTypes = []constant.EndpointType{constant.EndpointTypeOpenAIResponse}
} else {
endpointTypes = []constant.EndpointType{constant.EndpointTypeOpenAI}
}
}
if IsImageGenerationModel(modelName) {
// add to first
endpointTypes = append([]constant.EndpointType{constant.EndpointTypeImageGeneration}, endpointTypes...)
}
return endpointTypes
}
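
Order carries meaning here: the first element is the preferred endpoint, and image-generation models are prepended so they take priority. A quick check:

package main

import (
	"fmt"
	"one-api/common"
	"one-api/constant"
)

func main() {
	// Gemini channel: native endpoint first, OpenAI-compatible as fallback.
	fmt.Println(common.GetEndpointTypesByChannelType(constant.ChannelTypeGemini, "gemini-2.5-pro"))
	// [gemini openai]
	// Image model on an OpenAI channel: image-generation is prepended.
	fmt.Println(common.GetEndpointTypesByChannelType(constant.ChannelTypeOpenAI, "gpt-image-1"))
	// [image-generation openai]
}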

View File

@@ -2,10 +2,11 @@ package common
import (
"bytes"
"encoding/json"
"github.com/gin-gonic/gin"
"io"
"one-api/constant"
"strings"
"time"
)
const KeyRequestBody = "key_request_body"
@@ -31,7 +32,7 @@ func UnmarshalBodyReusable(c *gin.Context, v any) error {
}
contentType := c.Request.Header.Get("Content-Type")
if strings.HasPrefix(contentType, "application/json") {
err = json.Unmarshal(requestBody, &v)
err = Unmarshal(requestBody, &v)
} else {
// skip for now
// TODO: someday non json request have variant model, we will need to implementation this
@@ -43,3 +44,45 @@ func UnmarshalBodyReusable(c *gin.Context, v any) error {
c.Request.Body = io.NopCloser(bytes.NewBuffer(requestBody))
return nil
}
func SetContextKey(c *gin.Context, key constant.ContextKey, value any) {
c.Set(string(key), value)
}
func GetContextKey(c *gin.Context, key constant.ContextKey) (any, bool) {
return c.Get(string(key))
}
func GetContextKeyString(c *gin.Context, key constant.ContextKey) string {
return c.GetString(string(key))
}
func GetContextKeyInt(c *gin.Context, key constant.ContextKey) int {
return c.GetInt(string(key))
}
func GetContextKeyBool(c *gin.Context, key constant.ContextKey) bool {
return c.GetBool(string(key))
}
func GetContextKeyStringSlice(c *gin.Context, key constant.ContextKey) []string {
return c.GetStringSlice(string(key))
}
func GetContextKeyStringMap(c *gin.Context, key constant.ContextKey) map[string]any {
return c.GetStringMap(string(key))
}
func GetContextKeyTime(c *gin.Context, key constant.ContextKey) time.Time {
return c.GetTime(string(key))
}
func GetContextKeyType[T any](c *gin.Context, key constant.ContextKey) (T, bool) {
if value, ok := c.Get(string(key)); ok {
if v, ok := value.(T); ok {
return v, true
}
}
var t T
return t, false
}
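
A short usage sketch of the typed context helpers; the `UserSetting` struct here is hypothetical, not the project's real type:

package main

import (
	"fmt"
	"one-api/common"
	"one-api/constant"

	"github.com/gin-gonic/gin"
)

// UserSetting is an illustrative type for the sketch.
type UserSetting struct{ Theme string }

func demo(c *gin.Context) {
	common.SetContextKey(c, constant.ContextKeyUserSetting, UserSetting{Theme: "dark"})

	// The generic accessor type-asserts and reports success.
	if s, ok := common.GetContextKeyType[UserSetting](c, constant.ContextKeyUserSetting); ok {
		fmt.Println(s.Theme) // dark
	}
	// A mismatched type yields the zero value and ok=false instead of a panic.
	_, ok := common.GetContextKeyType[int](c, constant.ContextKeyUserSetting)
	fmt.Println(ok) // false
}

func main() {
	r := gin.New()
	r.GET("/demo", demo)
	_ = r.Run(":8080")
}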

common/http.go (new file, 57 lines)
View File

@@ -0,0 +1,57 @@
package common
import (
"bytes"
"fmt"
"io"
"net/http"
"github.com/gin-gonic/gin"
)
func CloseResponseBodyGracefully(httpResponse *http.Response) {
if httpResponse == nil || httpResponse.Body == nil {
return
}
err := httpResponse.Body.Close()
if err != nil {
SysError("failed to close response body: " + err.Error())
}
}
func IOCopyBytesGracefully(c *gin.Context, src *http.Response, data []byte) {
if c.Writer == nil {
return
}
body := io.NopCloser(bytes.NewBuffer(data))
// We shouldn't set the headers before parsing the response body, because parsing may fail.
// We would then have to send an error response, but the headers would already have been set,
// leaving the HTTP client confused by the response.
// For example, Postman would report an error, and we could not inspect the response at all.
if src != nil {
for k, v := range src.Header {
// avoid setting Content-Length
if k == "Content-Length" {
continue
}
c.Writer.Header().Set(k, v[0])
}
}
// set Content-Length header manually BEFORE calling WriteHeader
c.Writer.Header().Set("Content-Length", fmt.Sprintf("%d", len(data)))
// Write header with status code (this sends the headers)
if src != nil {
c.Writer.WriteHeader(src.StatusCode)
} else {
c.Writer.WriteHeader(http.StatusOK)
}
_, err := io.Copy(c.Writer, body)
if err != nil {
LogError(c, fmt.Sprintf("failed to copy response body: %s", err.Error()))
}
}
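
A usage sketch for a relay handler; the upstream request setup is illustrative:

package relay

import (
	"io"
	"net/http"
	"one-api/common"

	"github.com/gin-gonic/gin"
)

// relayUpstream shows the intended call order: read and close the
// upstream body first, parse/bill as needed, and only then let
// IOCopyBytesGracefully write headers, status, and body to the client.
func relayUpstream(c *gin.Context, req *http.Request) {
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		c.JSON(http.StatusBadGateway, gin.H{"error": err.Error()})
		return
	}
	body, err := io.ReadAll(resp.Body)
	common.CloseResponseBodyGracefully(resp) // logs instead of aborting on close failure
	if err != nil {
		c.JSON(http.StatusBadGateway, gin.H{"error": err.Error()})
		return
	}
	// Any parsing that might fail happens here, before headers are written.
	common.IOCopyBytesGracefully(c, resp, body)
}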

View File

@@ -4,6 +4,7 @@ import (
"flag"
"fmt"
"log"
"one-api/constant"
"os"
"path/filepath"
"strconv"
@@ -24,7 +25,7 @@ func printHelp() {
fmt.Println("Usage: one-api [--port <port>] [--log-dir <log directory>] [--version] [--help]")
}
func LoadEnv() {
func InitEnv() {
flag.Parse()
if *PrintVersion {
@@ -95,4 +96,25 @@ func LoadEnv() {
GlobalWebRateLimitEnable = GetEnvOrDefaultBool("GLOBAL_WEB_RATE_LIMIT_ENABLE", true)
GlobalWebRateLimitNum = GetEnvOrDefault("GLOBAL_WEB_RATE_LIMIT", 60)
GlobalWebRateLimitDuration = int64(GetEnvOrDefault("GLOBAL_WEB_RATE_LIMIT_DURATION", 180))
initConstantEnv()
}
func initConstantEnv() {
constant.StreamingTimeout = GetEnvOrDefault("STREAMING_TIMEOUT", 120)
constant.DifyDebug = GetEnvOrDefaultBool("DIFY_DEBUG", true)
constant.MaxFileDownloadMB = GetEnvOrDefault("MAX_FILE_DOWNLOAD_MB", 20)
// ForceStreamOption overrides request options to force returning usage info
constant.ForceStreamOption = GetEnvOrDefaultBool("FORCE_STREAM_OPTION", true)
constant.GetMediaToken = GetEnvOrDefaultBool("GET_MEDIA_TOKEN", true)
constant.GetMediaTokenNotStream = GetEnvOrDefaultBool("GET_MEDIA_TOKEN_NOT_STREAM", true)
constant.UpdateTask = GetEnvOrDefaultBool("UPDATE_TASK", true)
constant.AzureDefaultAPIVersion = GetEnvOrDefaultString("AZURE_DEFAULT_API_VERSION", "2025-04-01-preview")
constant.GeminiVisionMaxImageNum = GetEnvOrDefault("GEMINI_VISION_MAX_IMAGE_NUM", 16)
constant.NotifyLimitCount = GetEnvOrDefault("NOTIFY_LIMIT_COUNT", 2)
constant.NotificationLimitDurationMinute = GetEnvOrDefault("NOTIFICATION_LIMIT_DURATION_MINUTE", 10)
// GenerateDefaultToken controls whether an initial token is generated; off by default.
constant.GenerateDefaultToken = GetEnvOrDefaultBool("GENERATE_DEFAULT_TOKEN", false)
// whether error logging is enabled
constant.ErrorLogEnabled = GetEnvOrDefaultBool("ERROR_LOG_ENABLED", false)
}

View File

@@ -5,14 +5,18 @@ import (
"encoding/json"
)
func DecodeJson(data []byte, v any) error {
return json.NewDecoder(bytes.NewReader(data)).Decode(v)
func Unmarshal(data []byte, v any) error {
return json.Unmarshal(data, v)
}
func DecodeJsonStr(data string, v any) error {
return DecodeJson(StringToByteSlice(data), v)
func UnmarshalJsonStr(data string, v any) error {
return json.Unmarshal(StringToByteSlice(data), v)
}
func EncodeJson(v any) ([]byte, error) {
func DecodeJson(reader *bytes.Reader, v any) error {
return json.NewDecoder(reader).Decode(v)
}
func Marshal(v any) ([]byte, error) {
return json.Marshal(v)
}
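
After the rename, call sites read like the standard library. A small round-trip sketch:

package main

import (
	"fmt"
	"one-api/common"
)

type usage struct {
	PromptTokens int `json:"prompt_tokens"`
}

func main() {
	b, _ := common.Marshal(usage{PromptTokens: 12}) // wraps json.Marshal
	var u usage
	if err := common.UnmarshalJsonStr(string(b), &u); err == nil {
		fmt.Println(u.PromptTokens) // 12
	}
}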

common/model.go (new file, 42 lines)
View File

@@ -0,0 +1,42 @@
package common
import "strings"
var (
// OpenAIResponseOnlyModels is a list of models that are only available for OpenAI responses.
OpenAIResponseOnlyModels = []string{
"o3-pro",
"o3-deep-research",
"o4-mini-deep-research",
}
ImageGenerationModels = []string{
"dall-e-3",
"dall-e-2",
"gpt-image-1",
"prefix:imagen-",
"flux-",
"flux.1-",
}
)
func IsOpenAIResponseOnlyModel(modelName string) bool {
for _, m := range OpenAIResponseOnlyModels {
if strings.Contains(modelName, m) {
return true
}
}
return false
}
func IsImageGenerationModel(modelName string) bool {
modelName = strings.ToLower(modelName)
for _, m := range ImageGenerationModels {
if strings.Contains(modelName, m) {
return true
}
if strings.HasPrefix(m, "prefix:") && strings.HasPrefix(modelName, strings.TrimPrefix(m, "prefix:")) {
return true
}
}
return false
}
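
Note the `prefix:` convention: entries such as `prefix:imagen-` match only at the start of the lowercased model name, while plain entries match anywhere in it. For example:

package main

import (
	"fmt"
	"one-api/common"
)

func main() {
	fmt.Println(common.IsImageGenerationModel("dall-e-3"))        // true: substring match
	fmt.Println(common.IsImageGenerationModel("Imagen-3.0"))      // true: prefix match, case-insensitive
	fmt.Println(common.IsImageGenerationModel("my-imagen-proxy")) // false: prefix entries never match mid-string
}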

common/page_info.go (new file, 62 lines)
View File

@@ -0,0 +1,62 @@
package common
import (
"github.com/gin-gonic/gin"
"strconv"
)
type PageInfo struct {
Page int `json:"page"` // page number
PageSize int `json:"page_size"` // page size
StartTimestamp int64 `json:"start_timestamp"` // unix seconds
EndTimestamp int64 `json:"end_timestamp"` // unix seconds
Total int `json:"total"` // total count, set by the handler
Items any `json:"items"` // payload, set by the handler
}
func (p *PageInfo) GetStartIdx() int {
return (p.Page - 1) * p.PageSize
}
func (p *PageInfo) GetEndIdx() int {
return p.Page * p.PageSize
}
func (p *PageInfo) GetPageSize() int {
return p.PageSize
}
func (p *PageInfo) GetPage() int {
return p.Page
}
func (p *PageInfo) SetTotal(total int) {
p.Total = total
}
func (p *PageInfo) SetItems(items any) {
p.Items = items
}
func GetPageQuery(c *gin.Context) (*PageInfo, error) {
pageInfo := &PageInfo{}
err := c.BindQuery(pageInfo)
if err != nil {
return nil, err
}
if pageInfo.Page < 1 {
// backward compatibility: fall back to the legacy `p` query parameter
page, _ := strconv.Atoi(c.Query("p"))
if page != 0 {
pageInfo.Page = page
} else {
pageInfo.Page = 1
}
}
if pageInfo.PageSize == 0 {
pageInfo.PageSize = ItemsPerPage
}
return pageInfo, nil
}
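
A handler sketch showing the intended flow; the data slice is illustrative:

package main

import (
	"net/http"
	"one-api/common"

	"github.com/gin-gonic/gin"
)

func listItems(c *gin.Context) {
	pageInfo, err := common.GetPageQuery(c) // also honours the legacy ?p= parameter
	if err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
		return
	}
	all := []string{"a", "b", "c", "d", "e"} // illustrative data
	start, end := pageInfo.GetStartIdx(), pageInfo.GetEndIdx()
	if start > len(all) {
		start = len(all)
	}
	if end > len(all) {
		end = len(all)
	}
	pageInfo.SetTotal(len(all))
	pageInfo.SetItems(all[start:end])
	c.JSON(http.StatusOK, pageInfo)
}

func main() {
	r := gin.Default()
	r.GET("/items", listItems)
	_ = r.Run(":8080")
}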

View File

@@ -16,6 +16,10 @@ import (
var RDB *redis.Client
var RedisEnabled = true
func RedisKeyCacheSeconds() int {
return SyncFrequency
}
// InitRedisClient This function is called after init()
func InitRedisClient() (err error) {
if os.Getenv("REDIS_CONN_STRING") == "" {

View File

@@ -1,6 +1,7 @@
package common
import (
"encoding/base64"
"encoding/json"
"math/rand"
"strconv"
@@ -31,16 +32,30 @@ func MapToJsonStr(m map[string]interface{}) string {
return string(bytes)
}
func StrToMap(str string) map[string]interface{} {
func StrToMap(str string) (map[string]interface{}, error) {
m := make(map[string]interface{})
err := json.Unmarshal([]byte(str), &m)
err := Unmarshal([]byte(str), &m)
if err != nil {
return nil
return nil, err
}
return m
return m, nil
}
func IsJsonStr(str string) bool {
func StrToJsonArray(str string) ([]interface{}, error) {
var js []interface{}
err := json.Unmarshal([]byte(str), &js)
if err != nil {
return nil, err
}
return js, nil
}
func IsJsonArray(str string) bool {
var js []interface{}
return json.Unmarshal([]byte(str), &js) == nil
}
func IsJsonObject(str string) bool {
var js map[string]interface{}
return json.Unmarshal([]byte(str), &js) == nil
}
@@ -68,3 +83,15 @@ func StringToByteSlice(s string) []byte {
tmp2 := [3]uintptr{tmp1[0], tmp1[1], tmp1[1]}
return *(*[]byte)(unsafe.Pointer(&tmp2))
}
func EncodeBase64(str string) string {
return base64.StdEncoding.EncodeToString([]byte(str))
}
func GetJsonString(data any) string {
if data == nil {
return ""
}
b, _ := json.Marshal(data)
return string(b)
}
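
`StrToMap` now surfaces the parse error instead of silently returning `nil`, so callers must check it:

package main

import (
	"fmt"
	"one-api/common"
)

func main() {
	m, err := common.StrToMap(`{"model": "gpt-4o"}`)
	if err != nil {
		fmt.Println("invalid JSON:", err)
		return
	}
	fmt.Println(m["model"])                        // gpt-4o
	fmt.Println(common.EncodeBase64("sk-example")) // c2stZXhhbXBsZQ==
}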

constant/README.md (new file, 26 lines)
View File

@@ -0,0 +1,26 @@
# constant package (`/constant`)
This directory holds only globally reusable **constant definitions**; it contains no business logic or dependencies.
## Current files
| File | Description |
|----------------------|---------------------------------------------------------------------|
| `azure.go` | Azure-related global constants, e.g. `AzureNoRemoveDotTime` (the cutoff time controlling removal of the `.`). |
| `cache_key.go` | Cache-key format strings and token-related field constants, unifying cache naming. |
| `channel_setting.go` | Channel-level setting keys such as `proxy` and `force_format`. |
| `context_key.go` | Defines the `ContextKey` type and the context keys used throughout the project (request time, token/channel/user information, etc.). |
| `env.go` | Environment-configuration globals, injected at startup from config files or environment variables. |
| `finish_reason.go` | The set of `finish_reason` string constants returned by OpenAI/GPT requests. |
| `midjourney.go` | Midjourney error codes, action constants, and the model-to-action mapping table. |
| `setup.go` | Indicates whether initial installation has completed (the `Setup` boolean). |
| `task.go` | Task platform and action constants and model-to-action mappings for Suno, Midjourney, etc. |
| `user_setting.go` | User-setting key constants and notification types (Email/Webhook). |
## Usage conventions
1. The `constant` package **may only be imported by other packages**; **importing other in-project packages from here is forbidden**. If genuinely necessary, only the **Go standard library** may be imported.
2. No business-flow, database, or third-party-service logic may be written in this directory.
3. When adding new types, keep names semantically clear and add a row to the **Current files** table in this README so team members can quickly understand their purpose.
> ⚠️ Violating these conventions creates unnecessary coupling between packages and harms maintainability and testability. Please check your changes before committing.

constant/api_type.go (new file, 34 lines)
View File

@@ -0,0 +1,34 @@
package constant
const (
APITypeOpenAI = iota
APITypeAnthropic
APITypePaLM
APITypeBaidu
APITypeZhipu
APITypeAli
APITypeXunfei
APITypeAIProxyLibrary
APITypeTencent
APITypeGemini
APITypeZhipuV4
APITypeOllama
APITypePerplexity
APITypeAws
APITypeCohere
APITypeDify
APITypeJina
APITypeCloudflare
APITypeSiliconFlow
APITypeVertexAi
APITypeMistral
APITypeDeepSeek
APITypeMokaAI
APITypeVolcEngine
APITypeBaiduV2
APITypeOpenRouter
APITypeXinference
APITypeXai
APITypeCoze
APITypeDummy // this one is only for count, do not add any channel after this
)

View File

@@ -1,12 +1,5 @@
package constant
import "one-api/common"
// use a function to avoid assignment problems caused by initialization order
func RedisKeyCacheSeconds() int {
return common.SyncFrequency
}
// Cache keys
const (
UserGroupKeyFmt = "user_group:%d"

constant/channel.go (new file, 109 lines)
View File

@@ -0,0 +1,109 @@
package constant
const (
ChannelTypeUnknown = 0
ChannelTypeOpenAI = 1
ChannelTypeMidjourney = 2
ChannelTypeAzure = 3
ChannelTypeOllama = 4
ChannelTypeMidjourneyPlus = 5
ChannelTypeOpenAIMax = 6
ChannelTypeOhMyGPT = 7
ChannelTypeCustom = 8
ChannelTypeAILS = 9
ChannelTypeAIProxy = 10
ChannelTypePaLM = 11
ChannelTypeAPI2GPT = 12
ChannelTypeAIGC2D = 13
ChannelTypeAnthropic = 14
ChannelTypeBaidu = 15
ChannelTypeZhipu = 16
ChannelTypeAli = 17
ChannelTypeXunfei = 18
ChannelType360 = 19
ChannelTypeOpenRouter = 20
ChannelTypeAIProxyLibrary = 21
ChannelTypeFastGPT = 22
ChannelTypeTencent = 23
ChannelTypeGemini = 24
ChannelTypeMoonshot = 25
ChannelTypeZhipu_v4 = 26
ChannelTypePerplexity = 27
ChannelTypeLingYiWanWu = 31
ChannelTypeAws = 33
ChannelTypeCohere = 34
ChannelTypeMiniMax = 35
ChannelTypeSunoAPI = 36
ChannelTypeDify = 37
ChannelTypeJina = 38
ChannelCloudflare = 39
ChannelTypeSiliconFlow = 40
ChannelTypeVertexAi = 41
ChannelTypeMistral = 42
ChannelTypeDeepSeek = 43
ChannelTypeMokaAI = 44
ChannelTypeVolcEngine = 45
ChannelTypeBaiduV2 = 46
ChannelTypeXinference = 47
ChannelTypeXai = 48
ChannelTypeCoze = 49
ChannelTypeKling = 50
ChannelTypeJimeng = 51
ChannelTypeDummy // this one is only for count, do not add any channel after this
)
var ChannelBaseURLs = []string{
"", // 0
"https://api.openai.com", // 1
"https://oa.api2d.net", // 2
"", // 3
"http://localhost:11434", // 4
"https://api.openai-sb.com", // 5
"https://api.openaimax.com", // 6
"https://api.ohmygpt.com", // 7
"", // 8
"https://api.caipacity.com", // 9
"https://api.aiproxy.io", // 10
"", // 11
"https://api.api2gpt.com", // 12
"https://api.aigc2d.com", // 13
"https://api.anthropic.com", // 14
"https://aip.baidubce.com", // 15
"https://open.bigmodel.cn", // 16
"https://dashscope.aliyuncs.com", // 17
"", // 18
"https://api.360.cn", // 19
"https://openrouter.ai/api", // 20
"https://api.aiproxy.io", // 21
"https://fastgpt.run/api/openapi", // 22
"https://hunyuan.tencentcloudapi.com", //23
"https://generativelanguage.googleapis.com", //24
"https://api.moonshot.cn", //25
"https://open.bigmodel.cn", //26
"https://api.perplexity.ai", //27
"", //28
"", //29
"", //30
"https://api.lingyiwanwu.com", //31
"", //32
"", //33
"https://api.cohere.ai", //34
"https://api.minimax.chat", //35
"", //36
"https://api.dify.ai", //37
"https://api.jina.ai", //38
"https://api.cloudflare.com", //39
"https://api.siliconflow.cn", //40
"", //41
"https://api.mistral.ai", //42
"https://api.deepseek.com", //43
"https://api.moka.ai", //44
"https://ark.cn-beijing.volces.com", //45
"https://qianfan.baidubce.com", //46
"", //47
"https://api.x.ai", //48
"https://api.coze.cn", //49
"https://api.klingai.com", //50
"https://visual.volcengineapi.com", //51
}

View File

@@ -1,7 +0,0 @@
package constant
var (
ForceFormat = "force_format" // ForceFormat: force responses into the OpenAI format
ChanelSettingProxy = "proxy" // proxy setting
ChannelSettingThinkingToContent = "thinking_to_content" // ThinkingToContent
)


@@ -1,11 +1,42 @@
package constant
type ContextKey string
const (
ContextKeyRequestStartTime = "request_start_time"
ContextKeyUserSetting = "user_setting"
ContextKeyUserQuota = "user_quota"
ContextKeyUserStatus = "user_status"
ContextKeyUserEmail = "user_email"
ContextKeyUserGroup = "user_group"
ContextKeyUsingGroup = "group"
ContextKeyOriginalModel ContextKey = "original_model"
ContextKeyRequestStartTime ContextKey = "request_start_time"
/* token related keys */
ContextKeyTokenUnlimited ContextKey = "token_unlimited_quota"
ContextKeyTokenKey ContextKey = "token_key"
ContextKeyTokenId ContextKey = "token_id"
ContextKeyTokenGroup ContextKey = "token_group"
ContextKeyTokenAllowIps ContextKey = "allow_ips"
ContextKeyTokenSpecificChannelId ContextKey = "specific_channel_id"
ContextKeyTokenModelLimitEnabled ContextKey = "token_model_limit_enabled"
ContextKeyTokenModelLimit ContextKey = "token_model_limit"
/* channel related keys */
ContextKeyChannelId ContextKey = "channel_id"
ContextKeyChannelName ContextKey = "channel_name"
ContextKeyChannelCreateTime ContextKey = "channel_create_name"
ContextKeyChannelBaseUrl ContextKey = "base_url"
ContextKeyChannelType ContextKey = "channel_type"
ContextKeyChannelSetting ContextKey = "channel_setting"
ContextKeyChannelParamOverride ContextKey = "param_override"
ContextKeyChannelOrganization ContextKey = "channel_organization"
ContextKeyChannelAutoBan ContextKey = "auto_ban"
ContextKeyChannelModelMapping ContextKey = "model_mapping"
ContextKeyChannelStatusCodeMapping ContextKey = "status_code_mapping"
ContextKeyChannelIsMultiKey ContextKey = "channel_is_multi_key"
/* user related keys */
ContextKeyUserId ContextKey = "id"
ContextKeyUserSetting ContextKey = "user_setting"
ContextKeyUserQuota ContextKey = "user_quota"
ContextKeyUserStatus ContextKey = "user_status"
ContextKeyUserEmail ContextKey = "user_email"
ContextKeyUserGroup ContextKey = "user_group"
ContextKeyUsingGroup ContextKey = "group"
ContextKeyUserName ContextKey = "username"
)
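The typed ContextKey pairs with the `common.SetContextKey` / `GetContextKey*` helpers used throughout this diff. Assuming they are thin wrappers over gin's per-request map, their shape is roughly the following (the local ContextKey alias is only here to keep the sketch self-contained):

```go
package common

import "github.com/gin-gonic/gin"

// ContextKey mirrors constant.ContextKey for this sketch.
type ContextKey string

func SetContextKey(c *gin.Context, key ContextKey, value any) {
	c.Set(string(key), value)
}

func GetContextKey(c *gin.Context, key ContextKey) (any, bool) {
	return c.Get(string(key))
}

func GetContextKeyString(c *gin.Context, key ContextKey) string {
	return c.GetString(string(key))
}

func GetContextKeyBool(c *gin.Context, key ContextKey) bool {
	return c.GetBool(string(key))
}
```

The typed key keeps call sites greppable and prevents two packages from colliding on the same bare string.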

constant/endpoint_type.go (new file, 16 lines)

@@ -0,0 +1,16 @@
package constant
type EndpointType string
const (
EndpointTypeOpenAI EndpointType = "openai"
EndpointTypeOpenAIResponse EndpointType = "openai-response"
EndpointTypeAnthropic EndpointType = "anthropic"
EndpointTypeGemini EndpointType = "gemini"
EndpointTypeJinaRerank EndpointType = "jina-rerank"
EndpointTypeImageGeneration EndpointType = "image-generation"
//EndpointTypeMidjourney EndpointType = "midjourney-proxy"
//EndpointTypeSuno EndpointType = "suno-proxy"
//EndpointTypeKling EndpointType = "kling"
//EndpointTypeJimeng EndpointType = "jimeng"
)


@@ -1,9 +1,5 @@
package constant
import (
"one-api/common"
)
var StreamingTimeout int
var DifyDebug bool
var MaxFileDownloadMB int
@@ -17,39 +13,3 @@ var NotifyLimitCount int
var NotificationLimitDurationMinute int
var GenerateDefaultToken bool
var ErrorLogEnabled bool
//var GeminiModelMap = map[string]string{
// "gemini-1.0-pro": "v1",
//}
func InitEnv() {
StreamingTimeout = common.GetEnvOrDefault("STREAMING_TIMEOUT", 120)
DifyDebug = common.GetEnvOrDefaultBool("DIFY_DEBUG", true)
MaxFileDownloadMB = common.GetEnvOrDefault("MAX_FILE_DOWNLOAD_MB", 20)
// ForceStreamOption overrides request parameters to force usage info to be returned
ForceStreamOption = common.GetEnvOrDefaultBool("FORCE_STREAM_OPTION", true)
GetMediaToken = common.GetEnvOrDefaultBool("GET_MEDIA_TOKEN", true)
GetMediaTokenNotStream = common.GetEnvOrDefaultBool("GET_MEDIA_TOKEN_NOT_STREAM", true)
UpdateTask = common.GetEnvOrDefaultBool("UPDATE_TASK", true)
AzureDefaultAPIVersion = common.GetEnvOrDefaultString("AZURE_DEFAULT_API_VERSION", "2025-04-01-preview")
GeminiVisionMaxImageNum = common.GetEnvOrDefault("GEMINI_VISION_MAX_IMAGE_NUM", 16)
NotifyLimitCount = common.GetEnvOrDefault("NOTIFY_LIMIT_COUNT", 2)
NotificationLimitDurationMinute = common.GetEnvOrDefault("NOTIFICATION_LIMIT_DURATION_MINUTE", 10)
// GenerateDefaultToken: whether to generate an initial token, disabled by default.
GenerateDefaultToken = common.GetEnvOrDefaultBool("GENERATE_DEFAULT_TOKEN", false)
// whether error logging is enabled
ErrorLogEnabled = common.GetEnvOrDefaultBool("ERROR_LOG_ENABLED", false)
//modelVersionMapStr := strings.TrimSpace(os.Getenv("GEMINI_MODEL_MAP"))
//if modelVersionMapStr == "" {
// return
//}
//for _, pair := range strings.Split(modelVersionMapStr, ",") {
// parts := strings.Split(pair, ":")
// if len(parts) == 2 {
// GeminiModelMap[parts[0]] = parts[1]
// } else {
// common.SysError(fmt.Sprintf("invalid model version map: %s", pair))
// }
//}
}
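InitEnv leans entirely on the `common.GetEnvOrDefault*` helpers. Assuming strconv-based parsing with a fallback, they look roughly like this (a sketch, not the repo's exact code):

```go
package common

import (
	"os"
	"strconv"
)

func GetEnvOrDefault(key string, defaultValue int) int {
	if v := os.Getenv(key); v != "" {
		if n, err := strconv.Atoi(v); err == nil {
			return n
		}
	}
	return defaultValue
}

func GetEnvOrDefaultBool(key string, defaultValue bool) bool {
	if v := os.Getenv(key); v != "" {
		if b, err := strconv.ParseBool(v); err == nil {
			return b
		}
	}
	return defaultValue
}

func GetEnvOrDefaultString(key, defaultValue string) string {
	if v := os.Getenv(key); v != "" {
		return v
	}
	return defaultValue
}
```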


@@ -22,6 +22,8 @@ const (
MjActionPan = "PAN"
MjActionSwapFace = "SWAP_FACE"
MjActionUpload = "UPLOAD"
MjActionVideo = "VIDEO"
MjActionEdits = "EDITS"
)
var MidjourneyModel2Action = map[string]string{
@@ -41,4 +43,6 @@ var MidjourneyModel2Action = map[string]string{
"mj_pan": MjActionPan,
"swap_face": MjActionSwapFace,
"mj_upload": MjActionUpload,
"mj_video": MjActionVideo,
"mj_edits": MjActionEdits,
}


@@ -0,0 +1,8 @@
package constant
type MultiKeyMode string
const (
MultiKeyModeRandom MultiKeyMode = "random" // random
MultiKeyModePolling MultiKeyMode = "polling" // round-robin
)
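The two modes only differ in how the next key is drawn from the aggregated key list. A minimal selector sketch (the counter bookkeeping is illustrative; the real logic lives in the channel model):

```go
package constant

import (
	"math/rand"
	"sync/atomic"
)

var pollingCounter atomic.Uint64 // per-channel in a real implementation

// pickKey draws the next key from an aggregated multi-key channel.
func pickKey(keys []string, mode MultiKeyMode) string {
	if len(keys) == 0 {
		return ""
	}
	switch mode {
	case MultiKeyModeRandom:
		return keys[rand.Intn(len(keys))] // uniform random
	case MultiKeyModePolling:
		n := pollingCounter.Add(1) - 1 // strict round-robin
		return keys[n%uint64(len(keys))]
	default:
		return keys[0]
	}
}
```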


@@ -6,11 +6,15 @@ const (
TaskPlatformSuno TaskPlatform = "suno"
TaskPlatformMidjourney = "mj"
TaskPlatformKling TaskPlatform = "kling"
TaskPlatformJimeng TaskPlatform = "jimeng"
)
const (
SunoActionMusic = "MUSIC"
SunoActionLyrics = "LYRICS"
TaskActionGenerate = "generate"
TaskActionTextGenerate = "textGenerate"
)
var SunoModel2Action = map[string]string{


@@ -1,16 +0,0 @@
package constant
var (
UserSettingNotifyType = "notify_type" // QuotaWarningType: quota warning type
UserSettingQuotaWarningThreshold = "quota_warning_threshold" // QuotaWarningThreshold: quota warning threshold
UserSettingWebhookUrl = "webhook_url" // WebhookUrl: webhook address
UserSettingWebhookSecret = "webhook_secret" // WebhookSecret: webhook secret
UserSettingNotificationEmail = "notification_email" // NotificationEmail: notification email address
UserAcceptUnsetRatioModel = "accept_unset_model_ratio_model" // AcceptUnsetRatioModel: whether to accept models with no configured price
UserSettingRecordIpLog = "record_ip_log" // whether to log request/error IPs
)
var (
NotifyTypeEmail = "email" // Email
NotifyTypeWebhook = "webhook" // Webhook
)


@@ -8,6 +8,7 @@ import (
"io"
"net/http"
"one-api/common"
"one-api/constant"
"one-api/model"
"one-api/service"
"one-api/setting"
@@ -341,34 +342,34 @@ func updateChannelMoonshotBalance(channel *model.Channel) (float64, error) {
}
func updateChannelBalance(channel *model.Channel) (float64, error) {
baseURL := common.ChannelBaseURLs[channel.Type]
baseURL := constant.ChannelBaseURLs[channel.Type]
if channel.GetBaseURL() == "" {
channel.BaseURL = &baseURL
}
switch channel.Type {
case common.ChannelTypeOpenAI:
case constant.ChannelTypeOpenAI:
if channel.GetBaseURL() != "" {
baseURL = channel.GetBaseURL()
}
case common.ChannelTypeAzure:
case constant.ChannelTypeAzure:
return 0, errors.New("尚未实现")
case common.ChannelTypeCustom:
case constant.ChannelTypeCustom:
baseURL = channel.GetBaseURL()
//case common.ChannelTypeOpenAISB:
// return updateChannelOpenAISBBalance(channel)
case common.ChannelTypeAIProxy:
case constant.ChannelTypeAIProxy:
return updateChannelAIProxyBalance(channel)
case common.ChannelTypeAPI2GPT:
case constant.ChannelTypeAPI2GPT:
return updateChannelAPI2GPTBalance(channel)
case common.ChannelTypeAIGC2D:
case constant.ChannelTypeAIGC2D:
return updateChannelAIGC2DBalance(channel)
case common.ChannelTypeSiliconFlow:
case constant.ChannelTypeSiliconFlow:
return updateChannelSiliconFlowBalance(channel)
case common.ChannelTypeDeepSeek:
case constant.ChannelTypeDeepSeek:
return updateChannelDeepSeekBalance(channel)
case common.ChannelTypeOpenRouter:
case constant.ChannelTypeOpenRouter:
return updateChannelOpenRouterBalance(channel)
case common.ChannelTypeMoonshot:
case constant.ChannelTypeMoonshot:
return updateChannelMoonshotBalance(channel)
default:
return 0, errors.New("尚未实现")


@@ -11,14 +11,15 @@ import (
"net/http/httptest"
"net/url"
"one-api/common"
"one-api/constant"
"one-api/dto"
"one-api/middleware"
"one-api/model"
"one-api/relay"
relaycommon "one-api/relay/common"
"one-api/relay/constant"
"one-api/relay/helper"
"one-api/service"
"one-api/types"
"strconv"
"strings"
"sync"
@@ -29,20 +30,23 @@ import (
"github.com/gin-gonic/gin"
)
func testChannel(channel *model.Channel, testModel string) (err error, openAIErrorWithStatusCode *dto.OpenAIErrorWithStatusCode) {
func testChannel(channel *model.Channel, testModel string) (err error, newAPIError *types.NewAPIError) {
tik := time.Now()
if channel.Type == common.ChannelTypeMidjourney {
if channel.Type == constant.ChannelTypeMidjourney {
return errors.New("midjourney channel test is not supported"), nil
}
if channel.Type == common.ChannelTypeMidjourneyPlus {
return errors.New("midjourney plus channel test is not supported!!!"), nil
if channel.Type == constant.ChannelTypeMidjourneyPlus {
return errors.New("midjourney plus channel test is not supported"), nil
}
if channel.Type == common.ChannelTypeSunoAPI {
if channel.Type == constant.ChannelTypeSunoAPI {
return errors.New("suno channel test is not supported"), nil
}
if channel.Type == common.ChannelTypeKling {
if channel.Type == constant.ChannelTypeKling {
return errors.New("kling channel test is not supported"), nil
}
if channel.Type == constant.ChannelTypeJimeng {
return errors.New("jimeng channel test is not supported"), nil
}
w := httptest.NewRecorder()
c, _ := gin.CreateTestContext(w)
@@ -53,7 +57,7 @@ func testChannel(channel *model.Channel, testModel string) (err error, openAIErr
strings.HasPrefix(testModel, "m3e") || // m3e series models
strings.Contains(testModel, "bge-") || // bge series models
strings.Contains(testModel, "embed") ||
channel.Type == common.ChannelTypeMokaAI { // other embedding models
channel.Type == constant.ChannelTypeMokaAI { // other embedding models
requestPath = "/v1/embeddings" // switch the request path to the embeddings endpoint
}
@@ -95,14 +99,14 @@ func testChannel(channel *model.Channel, testModel string) (err error, openAIErr
err = helper.ModelMappedHelper(c, info, nil)
if err != nil {
return err, nil
return err, types.NewError(err, types.ErrorCodeChannelModelMappedError)
}
testModel = info.UpstreamModelName
apiType, _ := constant.ChannelType2APIType(channel.Type)
apiType, _ := common.ChannelType2APIType(channel.Type)
adaptor := relay.GetAdaptor(apiType)
if adaptor == nil {
return fmt.Errorf("invalid api type: %d, adaptor is nil", apiType), nil
return fmt.Errorf("invalid api type: %d, adaptor is nil", apiType), types.NewError(fmt.Errorf("invalid api type: %d, adaptor is nil", apiType), types.ErrorCodeInvalidApiType)
}
request := buildTestRequest(testModel)
@@ -113,45 +117,45 @@ func testChannel(channel *model.Channel, testModel string) (err error, openAIErr
priceData, err := helper.ModelPriceHelper(c, info, 0, int(request.MaxTokens))
if err != nil {
return err, nil
return err, types.NewError(err, types.ErrorCodeModelPriceError)
}
adaptor.Init(info)
convertedRequest, err := adaptor.ConvertOpenAIRequest(c, info, request)
if err != nil {
return err, nil
return err, types.NewError(err, types.ErrorCodeConvertRequestFailed)
}
jsonData, err := json.Marshal(convertedRequest)
if err != nil {
return err, nil
return err, types.NewError(err, types.ErrorCodeJsonMarshalFailed)
}
requestBody := bytes.NewBuffer(jsonData)
c.Request.Body = io.NopCloser(requestBody)
resp, err := adaptor.DoRequest(c, info, requestBody)
if err != nil {
return err, nil
return err, types.NewError(err, types.ErrorCodeDoRequestFailed)
}
var httpResp *http.Response
if resp != nil {
httpResp = resp.(*http.Response)
if httpResp.StatusCode != http.StatusOK {
err := service.RelayErrorHandler(httpResp, true)
return fmt.Errorf("status code %d: %s", httpResp.StatusCode, err.Error.Message), err
return err, types.NewError(err, types.ErrorCodeBadResponse)
}
}
usageA, respErr := adaptor.DoResponse(c, httpResp, info)
if respErr != nil {
return fmt.Errorf("%s", respErr.Error.Message), respErr
return respErr, respErr
}
if usageA == nil {
return errors.New("usage is nil"), nil
return errors.New("usage is nil"), types.NewError(errors.New("usage is nil"), types.ErrorCodeBadResponseBody)
}
usage := usageA.(*dto.Usage)
result := w.Result()
respBody, err := io.ReadAll(result.Body)
if err != nil {
return err, nil
return err, types.NewError(err, types.ErrorCodeReadResponseBodyFailed)
}
info.PromptTokens = usage.PromptTokens
@@ -170,8 +174,19 @@ func testChannel(channel *model.Channel, testModel string) (err error, openAIErr
consumedTime := float64(milliseconds) / 1000.0
other := service.GenerateTextOtherInfo(c, info, priceData.ModelRatio, priceData.GroupRatioInfo.GroupRatio, priceData.CompletionRatio,
usage.PromptTokensDetails.CachedTokens, priceData.CacheRatio, priceData.ModelPrice, priceData.GroupRatioInfo.GroupSpecialRatio)
model.RecordConsumeLog(c, 1, channel.Id, usage.PromptTokens, usage.CompletionTokens, info.OriginModelName, "模型测试",
quota, "模型测试", 0, quota, int(consumedTime), false, info.UsingGroup, other)
model.RecordConsumeLog(c, 1, model.RecordConsumeLogParams{
ChannelId: channel.Id,
PromptTokens: usage.PromptTokens,
CompletionTokens: usage.CompletionTokens,
ModelName: info.OriginModelName,
TokenName: "模型测试",
Quota: quota,
Content: "模型测试",
UseTimeSeconds: int(consumedTime),
IsStream: false,
Group: info.UsingGroup,
Other: other,
})
common.SysLog(fmt.Sprintf("testing channel #%d, response: \n%s", channel.Id, string(respBody)))
return nil, nil
}
@@ -199,7 +214,7 @@ func buildTestRequest(model string) *dto.GeneralOpenAIRequest {
testRequest.MaxTokens = 50
}
} else if strings.Contains(model, "gemini") {
testRequest.MaxTokens = 300
testRequest.MaxTokens = 3000
} else {
testRequest.MaxTokens = 10
}
@@ -232,15 +247,15 @@ func TestChannel(c *gin.Context) {
}
testModel := c.Query("model")
tik := time.Now()
err, _ = testChannel(channel, testModel)
_, newAPIError := testChannel(channel, testModel)
tok := time.Now()
milliseconds := tok.Sub(tik).Milliseconds()
go channel.UpdateResponseTime(milliseconds)
consumedTime := float64(milliseconds) / 1000.0
if err != nil {
if newAPIError != nil {
c.JSON(http.StatusOK, gin.H{
"success": false,
"message": err.Error(),
"message": newAPIError.Error(),
"time": consumedTime,
})
return
@@ -284,17 +299,15 @@ func testAllChannels(notify bool) error {
for _, channel := range channels {
isChannelEnabled := channel.Status == common.ChannelStatusEnabled
tik := time.Now()
err, openaiWithStatusErr := testChannel(channel, "")
err, newAPIError := testChannel(channel, "")
tok := time.Now()
milliseconds := tok.Sub(tik).Milliseconds()
shouldBanChannel := false
// request error disables the channel
if openaiWithStatusErr != nil {
oaiErr := openaiWithStatusErr.Error
err = errors.New(fmt.Sprintf("type %s, httpCode %d, code %v, message %s", oaiErr.Type, openaiWithStatusErr.StatusCode, oaiErr.Code, oaiErr.Message))
shouldBanChannel = service.ShouldDisableChannel(channel.Type, openaiWithStatusErr)
if err != nil {
shouldBanChannel = service.ShouldDisableChannel(channel.Type, newAPIError)
}
if milliseconds > disableThreshold {
@@ -308,7 +321,7 @@ func testAllChannels(notify bool) error {
}
// enable channel
if !isChannelEnabled && service.ShouldEnableChannel(err, openaiWithStatusErr, channel.Status) {
if !isChannelEnabled && service.ShouldEnableChannel(err, newAPIError, channel.Status) {
service.EnableChannel(channel.Id, channel.Name)
}
@@ -340,6 +353,10 @@ func TestAllChannels(c *gin.Context) {
}
func AutomaticallyTestChannels(frequency int) {
if frequency <= 0 {
common.SysLog("CHANNEL_TEST_FREQUENCY is not set or invalid, skipping automatic channel test")
return
}
for {
time.Sleep(time.Duration(frequency) * time.Minute)
common.SysLog("testing all channels")
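The new guard stops this goroutine from spinning on `time.Sleep(0)` when CHANNEL_TEST_FREQUENCY is unset or invalid. The same loop could also be written with a ticker — a sketch reusing the `testAllChannels(notify bool)` signature visible above (the `false` notify flag is an assumption):

```go
func AutomaticallyTestChannels(frequency int) {
	if frequency <= 0 {
		common.SysLog("CHANNEL_TEST_FREQUENCY is not set or invalid, skipping automatic channel test")
		return
	}
	ticker := time.NewTicker(time.Duration(frequency) * time.Minute)
	defer ticker.Stop()
	for range ticker.C {
		common.SysLog("testing all channels")
		if err := testAllChannels(false); err != nil {
			common.SysLog("testing all channels failed: " + err.Error())
		}
	}
}
```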


@@ -5,6 +5,7 @@ import (
"fmt"
"net/http"
"one-api/common"
"one-api/constant"
"one-api/model"
"strconv"
"strings"
@@ -125,7 +126,7 @@ func GetAllChannels(c *gin.Context) {
order = "id desc"
}
err := baseQuery.Order(order).Limit(pageSize).Offset((p-1)*pageSize).Omit("key").Find(&channelData).Error
err := baseQuery.Order(order).Limit(pageSize).Offset((p - 1) * pageSize).Omit("key").Find(&channelData).Error
if err != nil {
c.JSON(http.StatusOK, gin.H{"success": false, "message": err.Error()})
return
@@ -181,15 +182,15 @@ func FetchUpstreamModels(c *gin.Context) {
return
}
baseURL := common.ChannelBaseURLs[channel.Type]
baseURL := constant.ChannelBaseURLs[channel.Type]
if channel.GetBaseURL() != "" {
baseURL = channel.GetBaseURL()
}
url := fmt.Sprintf("%s/v1/models", baseURL)
switch channel.Type {
case common.ChannelTypeGemini:
case constant.ChannelTypeGemini:
url = fmt.Sprintf("%s/v1beta/openai/models", baseURL)
case common.ChannelTypeAli:
case constant.ChannelTypeAli:
url = fmt.Sprintf("%s/compatible-mode/v1/models", baseURL)
}
body, err := GetResponseBody("GET", url, channel, GetAuthHeader(channel.Key))
@@ -213,7 +214,7 @@ func FetchUpstreamModels(c *gin.Context) {
var ids []string
for _, model := range result.Data {
id := model.ID
if channel.Type == common.ChannelTypeGemini {
if channel.Type == constant.ChannelTypeGemini {
id = strings.TrimPrefix(id, "models/")
}
ids = append(ids, id)
@@ -227,7 +228,7 @@ func FetchUpstreamModels(c *gin.Context) {
}
func FixChannelsAbilities(c *gin.Context) {
count, err := model.FixAbility()
success, fails, err := model.FixAbility()
if err != nil {
c.JSON(http.StatusOK, gin.H{
"success": false,
@@ -238,7 +239,10 @@ func FixChannelsAbilities(c *gin.Context) {
c.JSON(http.StatusOK, gin.H{
"success": true,
"message": "",
"data": count,
"data": gin.H{
"success": success,
"fails": fails,
},
})
}
@@ -376,9 +380,47 @@ func GetChannel(c *gin.Context) {
return
}
type AddChannelRequest struct {
Mode string `json:"mode"`
MultiKeyMode constant.MultiKeyMode `json:"multi_key_mode"`
Channel *model.Channel `json:"channel"`
}
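A request body for this new shape might look like the following; the JSON field names and values are illustrative assumptions, and `mode` is one of "single", "batch", or "multi_to_single" per the switch in AddChannel below:

```go
// Hypothetical JSON payload for the channel-create endpoint:
const addChannelBody = `{
  "mode": "multi_to_single",
  "multi_key_mode": "polling",
  "channel": {
    "name": "openai-aggregated",
    "type": 1,
    "key": "sk-key-one\nsk-key-two\nsk-key-three",
    "models": "gpt-4o,gpt-4o-mini",
    "group": "default"
  }
}`
```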
func getVertexArrayKeys(keys string) ([]string, error) {
if keys == "" {
return nil, nil
}
var keyArray []interface{}
err := common.Unmarshal([]byte(keys), &keyArray)
if err != nil {
return nil, fmt.Errorf("批量添加 Vertex AI 必须使用标准的JsonArray格式例如[{key1}, {key2}...],请检查输入: %w", err)
}
cleanKeys := make([]string, 0, len(keyArray))
for _, key := range keyArray {
var keyStr string
switch v := key.(type) {
case string:
keyStr = strings.TrimSpace(v)
default:
bytes, err := json.Marshal(v)
if err != nil {
return nil, fmt.Errorf("Vertex AI key JSON 编码失败: %w", err)
}
keyStr = string(bytes)
}
if keyStr != "" {
cleanKeys = append(cleanKeys, keyStr)
}
}
if len(cleanKeys) == 0 {
return nil, fmt.Errorf("批量添加 Vertex AI 的 keys 不能为空")
}
return cleanKeys, nil
}
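getVertexArrayKeys accepts a JSON array whose elements are either strings or embedded service-account objects; non-string elements are re-marshalled into compact JSON. Illustrative use (key contents are made up):

```go
keys, err := getVertexArrayKeys(`[
  {"type": "service_account", "project_id": "proj-a", "private_key": "..."},
  {"type": "service_account", "project_id": "proj-b", "private_key": "..."}
]`)
// err == nil; keys holds two compact JSON strings. AddChannel then joins
// them with "\n" (multi_to_single) or fans them out into separate
// channels (batch).
```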
func AddChannel(c *gin.Context) {
channel := model.Channel{}
err := c.ShouldBindJSON(&channel)
addChannelRequest := AddChannelRequest{}
err := c.ShouldBindJSON(&addChannelRequest)
if err != nil {
c.JSON(http.StatusOK, gin.H{
"success": false,
@@ -386,49 +428,120 @@ func AddChannel(c *gin.Context) {
})
return
}
channel.CreatedTime = common.GetTimestamp()
keys := strings.Split(channel.Key, "\n")
if channel.Type == common.ChannelTypeVertexAi {
if channel.Other == "" {
err = addChannelRequest.Channel.ValidateSettings()
if err != nil {
c.JSON(http.StatusOK, gin.H{
"success": false,
"message": "channel setting 格式错误:" + err.Error(),
})
return
}
if addChannelRequest.Channel == nil || addChannelRequest.Channel.Key == "" {
c.JSON(http.StatusOK, gin.H{
"success": false,
"message": "channel cannot be empty",
})
return
}
// Validate the length of the model name
for _, m := range addChannelRequest.Channel.GetModels() {
if len(m) > 255 {
c.JSON(http.StatusOK, gin.H{
"success": false,
"message": fmt.Sprintf("模型名称过长: %s", m),
})
return
}
}
if addChannelRequest.Channel.Type == constant.ChannelTypeVertexAi {
if addChannelRequest.Channel.Other == "" {
c.JSON(http.StatusOK, gin.H{
"success": false,
"message": "部署地区不能为空",
})
return
} else {
if common.IsJsonStr(channel.Other) {
// must have default
regionMap := common.StrToMap(channel.Other)
if regionMap["default"] == nil {
c.JSON(http.StatusOK, gin.H{
"success": false,
"message": "部署地区必须包含default字段",
})
return
}
regionMap, err := common.StrToMap(addChannelRequest.Channel.Other)
if err != nil {
c.JSON(http.StatusOK, gin.H{
"success": false,
"message": "部署地区必须是标准的Json格式例如{\"default\": \"us-central1\", \"region2\": \"us-east1\"}",
})
return
}
if regionMap["default"] == nil {
c.JSON(http.StatusOK, gin.H{
"success": false,
"message": "部署地区必须包含default字段",
})
return
}
}
keys = []string{channel.Key}
}
addChannelRequest.Channel.CreatedTime = common.GetTimestamp()
keys := make([]string, 0)
switch addChannelRequest.Mode {
case "multi_to_single":
addChannelRequest.Channel.ChannelInfo.IsMultiKey = true
addChannelRequest.Channel.ChannelInfo.MultiKeyMode = addChannelRequest.MultiKeyMode
if addChannelRequest.Channel.Type == constant.ChannelTypeVertexAi {
array, err := getVertexArrayKeys(addChannelRequest.Channel.Key)
if err != nil {
c.JSON(http.StatusOK, gin.H{
"success": false,
"message": err.Error(),
})
return
}
addChannelRequest.Channel.Key = strings.Join(array, "\n")
} else {
cleanKeys := make([]string, 0)
for _, key := range strings.Split(addChannelRequest.Channel.Key, "\n") {
if key == "" {
continue
}
key = strings.TrimSpace(key)
cleanKeys = append(cleanKeys, key)
}
addChannelRequest.Channel.Key = strings.Join(cleanKeys, "\n")
}
keys = []string{addChannelRequest.Channel.Key}
case "batch":
if addChannelRequest.Channel.Type == constant.ChannelTypeVertexAi {
// multi json
keys, err = getVertexArrayKeys(addChannelRequest.Channel.Key)
if err != nil {
c.JSON(http.StatusOK, gin.H{
"success": false,
"message": err.Error(),
})
return
}
} else {
keys = strings.Split(addChannelRequest.Channel.Key, "\n")
}
case "single":
keys = []string{addChannelRequest.Channel.Key}
default:
c.JSON(http.StatusOK, gin.H{
"success": false,
"message": "不支持的添加模式",
})
return
}
channels := make([]model.Channel, 0, len(keys))
for _, key := range keys {
if key == "" {
continue
}
localChannel := channel
localChannel := addChannelRequest.Channel
localChannel.Key = key
// Validate the length of the model name
models := strings.Split(localChannel.Models, ",")
for _, model := range models {
if len(model) > 255 {
c.JSON(http.StatusOK, gin.H{
"success": false,
"message": fmt.Sprintf("模型名称过长: %s", model),
})
return
}
}
channels = append(channels, localChannel)
channels = append(channels, *localChannel)
}
err = model.BatchInsertChannels(channels)
if err != nil {
@@ -613,7 +726,15 @@ func UpdateChannel(c *gin.Context) {
})
return
}
if channel.Type == common.ChannelTypeVertexAi {
err = channel.ValidateSettings()
if err != nil {
c.JSON(http.StatusOK, gin.H{
"success": false,
"message": "channel setting 格式错误:" + err.Error(),
})
return
}
if channel.Type == constant.ChannelTypeVertexAi {
if channel.Other == "" {
c.JSON(http.StatusOK, gin.H{
"success": false,
@@ -621,16 +742,20 @@ func UpdateChannel(c *gin.Context) {
})
return
} else {
if common.IsJsonStr(channel.Other) {
// must have default
regionMap := common.StrToMap(channel.Other)
if regionMap["default"] == nil {
c.JSON(http.StatusOK, gin.H{
"success": false,
"message": "部署地区必须包含default字段",
})
return
}
regionMap, err := common.StrToMap(channel.Other)
if err != nil {
c.JSON(http.StatusOK, gin.H{
"success": false,
"message": "部署地区必须是标准的Json格式例如{\"default\": \"us-central1\", \"region2\": \"us-east1\"}",
})
return
}
if regionMap["default"] == nil {
c.JSON(http.StatusOK, gin.H{
"success": false,
"message": "部署地区必须包含default字段",
})
return
}
}
}
@@ -668,7 +793,7 @@ func FetchModels(c *gin.Context) {
baseURL := req.BaseURL
if baseURL == "" {
baseURL = common.ChannelBaseURLs[req.Type]
baseURL = constant.ChannelBaseURLs[req.Type]
}
client := &http.Client{}


@@ -2,6 +2,7 @@ package controller
import (
"fmt"
"github.com/gin-gonic/gin"
"github.com/samber/lo"
"net/http"
"one-api/common"
@@ -14,10 +15,7 @@ import (
"one-api/relay/channel/minimax"
"one-api/relay/channel/moonshot"
relaycommon "one-api/relay/common"
relayconstant "one-api/relay/constant"
"one-api/setting"
"github.com/gin-gonic/gin"
)
// https://platform.openai.com/docs/api-reference/models/list
@@ -26,30 +24,10 @@ var openAIModels []dto.OpenAIModels
var openAIModelsMap map[string]dto.OpenAIModels
var channelId2Models map[int][]string
func getPermission() []dto.OpenAIModelPermission {
var permission []dto.OpenAIModelPermission
permission = append(permission, dto.OpenAIModelPermission{
Id: "modelperm-LwHkVFn8AcMItP432fKKDIKJ",
Object: "model_permission",
Created: 1626777600,
AllowCreateEngine: true,
AllowSampling: true,
AllowLogprobs: true,
AllowSearchIndices: false,
AllowView: true,
AllowFineTuning: false,
Organization: "*",
Group: nil,
IsBlocking: false,
})
return permission
}
func init() {
// https://platform.openai.com/docs/models/model-endpoint-compatibility
permission := getPermission()
for i := 0; i < relayconstant.APITypeDummy; i++ {
if i == relayconstant.APITypeAIProxyLibrary {
for i := 0; i < constant.APITypeDummy; i++ {
if i == constant.APITypeAIProxyLibrary {
continue
}
adaptor := relay.GetAdaptor(i)
@@ -57,69 +35,51 @@ func init() {
modelNames := adaptor.GetModelList()
for _, modelName := range modelNames {
openAIModels = append(openAIModels, dto.OpenAIModels{
Id: modelName,
Object: "model",
Created: 1626777600,
OwnedBy: channelName,
Permission: permission,
Root: modelName,
Parent: nil,
Id: modelName,
Object: "model",
Created: 1626777600,
OwnedBy: channelName,
})
}
}
for _, modelName := range ai360.ModelList {
openAIModels = append(openAIModels, dto.OpenAIModels{
Id: modelName,
Object: "model",
Created: 1626777600,
OwnedBy: ai360.ChannelName,
Permission: permission,
Root: modelName,
Parent: nil,
Id: modelName,
Object: "model",
Created: 1626777600,
OwnedBy: ai360.ChannelName,
})
}
for _, modelName := range moonshot.ModelList {
openAIModels = append(openAIModels, dto.OpenAIModels{
Id: modelName,
Object: "model",
Created: 1626777600,
OwnedBy: moonshot.ChannelName,
Permission: permission,
Root: modelName,
Parent: nil,
Id: modelName,
Object: "model",
Created: 1626777600,
OwnedBy: moonshot.ChannelName,
})
}
for _, modelName := range lingyiwanwu.ModelList {
openAIModels = append(openAIModels, dto.OpenAIModels{
Id: modelName,
Object: "model",
Created: 1626777600,
OwnedBy: lingyiwanwu.ChannelName,
Permission: permission,
Root: modelName,
Parent: nil,
Id: modelName,
Object: "model",
Created: 1626777600,
OwnedBy: lingyiwanwu.ChannelName,
})
}
for _, modelName := range minimax.ModelList {
openAIModels = append(openAIModels, dto.OpenAIModels{
Id: modelName,
Object: "model",
Created: 1626777600,
OwnedBy: minimax.ChannelName,
Permission: permission,
Root: modelName,
Parent: nil,
Id: modelName,
Object: "model",
Created: 1626777600,
OwnedBy: minimax.ChannelName,
})
}
for modelName, _ := range constant.MidjourneyModel2Action {
openAIModels = append(openAIModels, dto.OpenAIModels{
Id: modelName,
Object: "model",
Created: 1626777600,
OwnedBy: "midjourney",
Permission: permission,
Root: modelName,
Parent: nil,
Id: modelName,
Object: "model",
Created: 1626777600,
OwnedBy: "midjourney",
})
}
openAIModelsMap = make(map[string]dto.OpenAIModels)
@@ -127,9 +87,9 @@ func init() {
openAIModelsMap[aiModel.Id] = aiModel
}
channelId2Models = make(map[int][]string)
for i := 1; i <= common.ChannelTypeDummy; i++ {
apiType, success := relayconstant.ChannelType2APIType(i)
if !success || apiType == relayconstant.APITypeAIProxyLibrary {
for i := 1; i <= constant.ChannelTypeDummy; i++ {
apiType, success := common.ChannelType2APIType(i)
if !success || apiType == constant.APITypeAIProxyLibrary {
continue
}
meta := &relaycommon.RelayInfo{ChannelType: i}
@@ -144,11 +104,10 @@ func init() {
func ListModels(c *gin.Context) {
userOpenAiModels := make([]dto.OpenAIModels, 0)
permission := getPermission()
modelLimitEnable := c.GetBool("token_model_limit_enabled")
modelLimitEnable := common.GetContextKeyBool(c, constant.ContextKeyTokenModelLimitEnabled)
if modelLimitEnable {
s, ok := c.Get("token_model_limit")
s, ok := common.GetContextKey(c, constant.ContextKeyTokenModelLimit)
var tokenModelLimit map[string]bool
if ok {
tokenModelLimit = s.(map[string]bool)
@@ -156,23 +115,22 @@ func ListModels(c *gin.Context) {
tokenModelLimit = map[string]bool{}
}
for allowModel, _ := range tokenModelLimit {
if _, ok := openAIModelsMap[allowModel]; ok {
userOpenAiModels = append(userOpenAiModels, openAIModelsMap[allowModel])
if oaiModel, ok := openAIModelsMap[allowModel]; ok {
oaiModel.SupportedEndpointTypes = model.GetModelSupportEndpointTypes(allowModel)
userOpenAiModels = append(userOpenAiModels, oaiModel)
} else {
userOpenAiModels = append(userOpenAiModels, dto.OpenAIModels{
Id: allowModel,
Object: "model",
Created: 1626777600,
OwnedBy: "custom",
Permission: permission,
Root: allowModel,
Parent: nil,
Id: allowModel,
Object: "model",
Created: 1626777600,
OwnedBy: "custom",
SupportedEndpointTypes: model.GetModelSupportEndpointTypes(allowModel),
})
}
}
} else {
userId := c.GetInt("id")
userGroup, err := model.GetUserGroup(userId, true)
userGroup, err := model.GetUserGroup(userId, false)
if err != nil {
c.JSON(http.StatusOK, gin.H{
"success": false,
@@ -181,14 +139,14 @@ func ListModels(c *gin.Context) {
return
}
group := userGroup
tokenGroup := c.GetString("token_group")
tokenGroup := common.GetContextKeyString(c, constant.ContextKeyTokenGroup)
if tokenGroup != "" {
group = tokenGroup
}
var models []string
if tokenGroup == "auto" {
for _, autoGroup := range setting.AutoGroups {
groupModels := model.GetGroupModels(autoGroup)
groupModels := model.GetGroupEnabledModels(autoGroup)
for _, g := range groupModels {
if !common.StringsContains(models, g) {
models = append(models, g)
@@ -196,20 +154,19 @@ func ListModels(c *gin.Context) {
}
}
} else {
models = model.GetGroupModels(group)
models = model.GetGroupEnabledModels(group)
}
for _, s := range models {
if _, ok := openAIModelsMap[s]; ok {
userOpenAiModels = append(userOpenAiModels, openAIModelsMap[s])
for _, modelName := range models {
if oaiModel, ok := openAIModelsMap[modelName]; ok {
oaiModel.SupportedEndpointTypes = model.GetModelSupportEndpointTypes(modelName)
userOpenAiModels = append(userOpenAiModels, oaiModel)
} else {
userOpenAiModels = append(userOpenAiModels, dto.OpenAIModels{
Id: s,
Object: "model",
Created: 1626777600,
OwnedBy: "custom",
Permission: permission,
Root: s,
Parent: nil,
Id: modelName,
Object: "model",
Created: 1626777600,
OwnedBy: "custom",
SupportedEndpointTypes: model.GetModelSupportEndpointTypes(modelName),
})
}
}


@@ -3,45 +3,44 @@ package controller
import (
"errors"
"fmt"
"net/http"
"one-api/common"
"one-api/constant"
"one-api/dto"
"one-api/middleware"
"one-api/model"
"one-api/service"
"one-api/setting"
"one-api/types"
"time"
"github.com/gin-gonic/gin"
)
func Playground(c *gin.Context) {
var openaiErr *dto.OpenAIErrorWithStatusCode
var newAPIError *types.NewAPIError
defer func() {
if openaiErr != nil {
c.JSON(openaiErr.StatusCode, gin.H{
"error": openaiErr.Error,
if newAPIError != nil {
c.JSON(newAPIError.StatusCode, gin.H{
"error": newAPIError.ToOpenAIError(),
})
}
}()
useAccessToken := c.GetBool("use_access_token")
if useAccessToken {
openaiErr = service.OpenAIErrorWrapperLocal(errors.New("暂不支持使用 access token"), "access_token_not_supported", http.StatusBadRequest)
newAPIError = types.NewError(errors.New("暂不支持使用 access token"), types.ErrorCodeAccessDenied)
return
}
playgroundRequest := &dto.PlayGroundRequest{}
err := common.UnmarshalBodyReusable(c, playgroundRequest)
if err != nil {
openaiErr = service.OpenAIErrorWrapperLocal(err, "unmarshal_request_failed", http.StatusBadRequest)
newAPIError = types.NewError(err, types.ErrorCodeInvalidRequest)
return
}
if playgroundRequest.Model == "" {
openaiErr = service.OpenAIErrorWrapperLocal(errors.New("请选择模型"), "model_required", http.StatusBadRequest)
newAPIError = types.NewError(errors.New("请选择模型"), types.ErrorCodeInvalidRequest)
return
}
c.Set("original_model", playgroundRequest.Model)
@@ -52,26 +51,32 @@ func Playground(c *gin.Context) {
group = userGroup
} else {
if !setting.GroupInUserUsableGroups(group) && group != userGroup {
openaiErr = service.OpenAIErrorWrapperLocal(errors.New("无权访问该分组"), "group_not_allowed", http.StatusForbidden)
newAPIError = types.NewError(errors.New("无权访问该分组"), types.ErrorCodeAccessDenied)
return
}
c.Set("group", group)
}
c.Set("token_name", "playground-"+group)
channel, finalGroup, err := model.CacheGetRandomSatisfiedChannel(c, group, playgroundRequest.Model, 0)
userId := c.GetInt("id")
//c.Set("token_name", "playground-"+group)
tempToken := &model.Token{
UserId: userId,
Name: fmt.Sprintf("playground-%s", group),
Group: group,
}
_ = middleware.SetupContextForToken(c, tempToken)
_, err = getChannel(c, group, playgroundRequest.Model, 0)
if err != nil {
message := fmt.Sprintf("当前分组 %s 下对于模型 %s 无可用渠道", finalGroup, playgroundRequest.Model)
openaiErr = service.OpenAIErrorWrapperLocal(errors.New(message), "get_playground_channel_failed", http.StatusInternalServerError)
newAPIError = types.NewError(err, types.ErrorCodeGetChannelFailed)
return
}
middleware.SetupContextForSelectedChannel(c, channel, playgroundRequest.Model)
c.Set(constant.ContextKeyRequestStartTime, time.Now())
//middleware.SetupContextForSelectedChannel(c, channel, playgroundRequest.Model)
common.SetContextKey(c, constant.ContextKeyRequestStartTime, time.Now())
// Write user context to ensure acceptUnsetRatio is available
userId := c.GetInt("id")
userCache, err := model.GetUserCache(userId)
if err != nil {
openaiErr = service.OpenAIErrorWrapperLocal(err, "get_user_cache_failed", http.StatusInternalServerError)
newAPIError = types.NewError(err, types.ErrorCodeQueryDataError)
return
}
userCache.WriteContext(c)


@@ -8,23 +8,24 @@ import (
"log"
"net/http"
"one-api/common"
"one-api/constant"
constant2 "one-api/constant"
"one-api/dto"
"one-api/middleware"
"one-api/model"
"one-api/relay"
"one-api/relay/constant"
relayconstant "one-api/relay/constant"
"one-api/relay/helper"
"one-api/service"
"one-api/types"
"strings"
"github.com/gin-gonic/gin"
"github.com/gorilla/websocket"
)
func relayHandler(c *gin.Context, relayMode int) *dto.OpenAIErrorWithStatusCode {
var err *dto.OpenAIErrorWithStatusCode
func relayHandler(c *gin.Context, relayMode int) *types.NewAPIError {
var err *types.NewAPIError
switch relayMode {
case relayconstant.RelayModeImagesGenerations, relayconstant.RelayModeImagesEdits:
err = relay.ImageHelper(c)
@@ -55,43 +56,43 @@ func relayHandler(c *gin.Context, relayMode int) *dto.OpenAIErrorWithStatusCode
userGroup := c.GetString("group")
channelId := c.GetInt("channel_id")
other := make(map[string]interface{})
other["error_type"] = err.Error.Type
other["error_code"] = err.Error.Code
other["error_type"] = err.ErrorType
other["error_code"] = err.GetErrorCode()
other["status_code"] = err.StatusCode
other["channel_id"] = channelId
other["channel_name"] = c.GetString("channel_name")
other["channel_type"] = c.GetInt("channel_type")
model.RecordErrorLog(c, userId, channelId, modelName, tokenName, err.Error.Message, tokenId, 0, false, userGroup, other)
model.RecordErrorLog(c, userId, channelId, modelName, tokenName, err.Error(), tokenId, 0, false, userGroup, other)
}
return err
}
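Taken together, the call sites in this file show the pattern the PR converges on at the API boundary: build a *types.NewAPIError internally, stamp the request id onto its message, and only convert to the OpenAI (or Claude) wire format when writing the response. A condensed sketch using only calls visible in this diff:

```go
apiErr := types.NewError(err, types.ErrorCodeGetChannelFailed)
apiErr.SetMessage(common.MessageWithRequestId(apiErr.Error(), requestId))
c.JSON(apiErr.StatusCode, gin.H{"error": apiErr.ToOpenAIError()})
```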
func Relay(c *gin.Context) {
relayMode := constant.Path2RelayMode(c.Request.URL.Path)
relayMode := relayconstant.Path2RelayMode(c.Request.URL.Path)
requestId := c.GetString(common.RequestIdKey)
group := c.GetString("group")
originalModel := c.GetString("original_model")
var openaiErr *dto.OpenAIErrorWithStatusCode
var newAPIError *types.NewAPIError
for i := 0; i <= common.RetryTimes; i++ {
channel, err := getChannel(c, group, originalModel, i)
if err != nil {
common.LogError(c, err.Error())
openaiErr = service.OpenAIErrorWrapperLocal(err, "get_channel_failed", http.StatusInternalServerError)
newAPIError = types.NewError(err, types.ErrorCodeGetChannelFailed)
break
}
openaiErr = relayRequest(c, relayMode, channel)
newAPIError = relayRequest(c, relayMode, channel)
if openaiErr == nil {
if newAPIError == nil {
return // request handled successfully, return directly
}
go processChannelError(c, channel.Id, channel.Type, channel.Name, channel.GetAutoBan(), openaiErr)
go processChannelError(c, channel.Id, channel.Type, channel.Name, channel.GetAutoBan(), newAPIError)
if !shouldRetry(c, openaiErr, common.RetryTimes-i) {
if !shouldRetry(c, newAPIError, common.RetryTimes-i) {
break
}
}
@@ -101,14 +102,14 @@ func Relay(c *gin.Context) {
common.LogInfo(c, retryLogStr)
}
if openaiErr != nil {
if openaiErr.StatusCode == http.StatusTooManyRequests {
common.LogError(c, fmt.Sprintf("origin 429 error: %s", openaiErr.Error.Message))
openaiErr.Error.Message = "当前分组上游负载已饱和,请稍后再试"
if newAPIError != nil {
if newAPIError.StatusCode == http.StatusTooManyRequests {
common.LogError(c, fmt.Sprintf("origin 429 error: %s", newAPIError.Error()))
newAPIError.SetMessage("当前分组上游负载已饱和,请稍后再试")
}
openaiErr.Error.Message = common.MessageWithRequestId(openaiErr.Error.Message, requestId)
c.JSON(openaiErr.StatusCode, gin.H{
"error": openaiErr.Error,
newAPIError.SetMessage(common.MessageWithRequestId(newAPIError.Error(), requestId))
c.JSON(newAPIError.StatusCode, gin.H{
"error": newAPIError.ToOpenAIError(),
})
}
}
@@ -127,35 +128,34 @@ func WssRelay(c *gin.Context) {
defer ws.Close()
if err != nil {
openaiErr := service.OpenAIErrorWrapper(err, "get_channel_failed", http.StatusInternalServerError)
helper.WssError(c, ws, openaiErr.Error)
helper.WssError(c, ws, types.NewError(err, types.ErrorCodeGetChannelFailed).ToOpenAIError())
return
}
relayMode := constant.Path2RelayMode(c.Request.URL.Path)
relayMode := relayconstant.Path2RelayMode(c.Request.URL.Path)
requestId := c.GetString(common.RequestIdKey)
group := c.GetString("group")
//wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview-2024-10-01
originalModel := c.GetString("original_model")
var openaiErr *dto.OpenAIErrorWithStatusCode
var newAPIError *types.NewAPIError
for i := 0; i <= common.RetryTimes; i++ {
channel, err := getChannel(c, group, originalModel, i)
if err != nil {
common.LogError(c, err.Error())
openaiErr = service.OpenAIErrorWrapperLocal(err, "get_channel_failed", http.StatusInternalServerError)
newAPIError = types.NewError(err, types.ErrorCodeGetChannelFailed)
break
}
openaiErr = wssRequest(c, ws, relayMode, channel)
newAPIError = wssRequest(c, ws, relayMode, channel)
if openaiErr == nil {
if newAPIError == nil {
return // request handled successfully, return directly
}
go processChannelError(c, channel.Id, channel.Type, channel.Name, channel.GetAutoBan(), openaiErr)
go processChannelError(c, channel.Id, channel.Type, channel.Name, channel.GetAutoBan(), newAPIError)
if !shouldRetry(c, openaiErr, common.RetryTimes-i) {
if !shouldRetry(c, newAPIError, common.RetryTimes-i) {
break
}
}
@@ -165,12 +165,12 @@ func WssRelay(c *gin.Context) {
common.LogInfo(c, retryLogStr)
}
if openaiErr != nil {
if openaiErr.StatusCode == http.StatusTooManyRequests {
openaiErr.Error.Message = "当前分组上游负载已饱和,请稍后再试"
if newAPIError != nil {
if newAPIError.StatusCode == http.StatusTooManyRequests {
newAPIError.SetMessage("当前分组上游负载已饱和,请稍后再试")
}
openaiErr.Error.Message = common.MessageWithRequestId(openaiErr.Error.Message, requestId)
helper.WssError(c, ws, openaiErr.Error)
newAPIError.SetMessage(common.MessageWithRequestId(newAPIError.Error(), requestId))
helper.WssError(c, ws, newAPIError.ToOpenAIError())
}
}
@@ -179,27 +179,25 @@ func RelayClaude(c *gin.Context) {
requestId := c.GetString(common.RequestIdKey)
group := c.GetString("group")
originalModel := c.GetString("original_model")
var claudeErr *dto.ClaudeErrorWithStatusCode
var newAPIError *types.NewAPIError
for i := 0; i <= common.RetryTimes; i++ {
channel, err := getChannel(c, group, originalModel, i)
if err != nil {
common.LogError(c, err.Error())
claudeErr = service.ClaudeErrorWrapperLocal(err, "get_channel_failed", http.StatusInternalServerError)
newAPIError = types.NewError(err, types.ErrorCodeGetChannelFailed)
break
}
claudeErr = claudeRequest(c, channel)
newAPIError = claudeRequest(c, channel)
if claudeErr == nil {
if newAPIError == nil {
return // request handled successfully, return directly
}
openaiErr := service.ClaudeErrorToOpenAIError(claudeErr)
go processChannelError(c, channel.Id, channel.Type, channel.Name, channel.GetAutoBan(), newAPIError)
go processChannelError(c, channel.Id, channel.Type, channel.Name, channel.GetAutoBan(), openaiErr)
if !shouldRetry(c, openaiErr, common.RetryTimes-i) {
if !shouldRetry(c, newAPIError, common.RetryTimes-i) {
break
}
}
@@ -209,30 +207,30 @@ func RelayClaude(c *gin.Context) {
common.LogInfo(c, retryLogStr)
}
if claudeErr != nil {
claudeErr.Error.Message = common.MessageWithRequestId(claudeErr.Error.Message, requestId)
c.JSON(claudeErr.StatusCode, gin.H{
if newAPIError != nil {
newAPIError.SetMessage(common.MessageWithRequestId(newAPIError.Error(), requestId))
c.JSON(newAPIError.StatusCode, gin.H{
"type": "error",
"error": claudeErr.Error,
"error": newAPIError.ToClaudeError(),
})
}
}
func relayRequest(c *gin.Context, relayMode int, channel *model.Channel) *dto.OpenAIErrorWithStatusCode {
func relayRequest(c *gin.Context, relayMode int, channel *model.Channel) *types.NewAPIError {
addUsedChannel(c, channel.Id)
requestBody, _ := common.GetRequestBody(c)
c.Request.Body = io.NopCloser(bytes.NewBuffer(requestBody))
return relayHandler(c, relayMode)
}
func wssRequest(c *gin.Context, ws *websocket.Conn, relayMode int, channel *model.Channel) *dto.OpenAIErrorWithStatusCode {
func wssRequest(c *gin.Context, ws *websocket.Conn, relayMode int, channel *model.Channel) *types.NewAPIError {
addUsedChannel(c, channel.Id)
requestBody, _ := common.GetRequestBody(c)
c.Request.Body = io.NopCloser(bytes.NewBuffer(requestBody))
return relay.WssHelper(c, ws)
}
func claudeRequest(c *gin.Context, channel *model.Channel) *dto.ClaudeErrorWithStatusCode {
func claudeRequest(c *gin.Context, channel *model.Channel) *types.NewAPIError {
addUsedChannel(c, channel.Id)
requestBody, _ := common.GetRequestBody(c)
c.Request.Body = io.NopCloser(bytes.NewBuffer(requestBody))
@@ -259,19 +257,25 @@ func getChannel(c *gin.Context, group, originalModel string, retryCount int) (*m
AutoBan: &autoBanInt,
}, nil
}
channel, _, err := model.CacheGetRandomSatisfiedChannel(c, group, originalModel, retryCount)
channel, selectGroup, err := model.CacheGetRandomSatisfiedChannel(c, group, originalModel, retryCount)
if err != nil {
return nil, errors.New(fmt.Sprintf("获取重试渠道失败: %s", err.Error()))
if group == "auto" {
return nil, errors.New(fmt.Sprintf("获取自动分组下模型 %s 的可用渠道失败: %s", originalModel, err.Error()))
}
return nil, errors.New(fmt.Sprintf("获取分组 %s 下模型 %s 的可用渠道失败: %s", selectGroup, originalModel, err.Error()))
}
middleware.SetupContextForSelectedChannel(c, channel, originalModel)
return channel, nil
}
func shouldRetry(c *gin.Context, openaiErr *dto.OpenAIErrorWithStatusCode, retryTimes int) bool {
func shouldRetry(c *gin.Context, openaiErr *types.NewAPIError, retryTimes int) bool {
if openaiErr == nil {
return false
}
if openaiErr.LocalError {
if types.IsChannelError(openaiErr) {
return true
}
if types.IsLocalError(openaiErr) {
return false
}
if retryTimes <= 0 {
@@ -295,7 +299,7 @@ func shouldRetry(c *gin.Context, openaiErr *dto.OpenAIErrorWithStatusCode, retry
}
if openaiErr.StatusCode == http.StatusBadRequest {
channelType := c.GetInt("channel_type")
if channelType == common.ChannelTypeAnthropic {
if channelType == constant.ChannelTypeAnthropic {
return true
}
return false
@@ -310,12 +314,12 @@ func shouldRetry(c *gin.Context, openaiErr *dto.OpenAIErrorWithStatusCode, retry
return true
}
func processChannelError(c *gin.Context, channelId int, channelType int, channelName string, autoBan bool, err *dto.OpenAIErrorWithStatusCode) {
func processChannelError(c *gin.Context, channelId int, channelType int, channelName string, autoBan bool, err *types.NewAPIError) {
// do not use context to get channel info, there may be inconsistent channel info when processing asynchronously
common.LogError(c, fmt.Sprintf("relay error (channel #%d, status code: %d): %s", channelId, err.StatusCode, err.Error.Message))
common.LogError(c, fmt.Sprintf("relay error (channel #%d, status code: %d): %s", channelId, err.StatusCode, err.Error()))
if service.ShouldDisableChannel(channelType, err) && autoBan {
service.DisableChannel(channelId, channelName, err.Error.Message)
service.DisableChannel(channelId, channelName, err.Error())
}
}
@@ -388,9 +392,10 @@ func RelayTask(c *gin.Context) {
retryTimes = 0
}
for i := 0; shouldRetryTaskRelay(c, channelId, taskErr, retryTimes) && i < retryTimes; i++ {
channel, _, err := model.CacheGetRandomSatisfiedChannel(c, group, originalModel, i)
channel, err := getChannel(c, group, originalModel, i)
if err != nil {
common.LogError(c, fmt.Sprintf("CacheGetRandomSatisfiedChannel failed: %s", err.Error()))
taskErr = service.TaskErrorWrapperLocal(err, "get_channel_failed", http.StatusInternalServerError)
break
}
channelId = channel.Id
@@ -398,7 +403,7 @@ func RelayTask(c *gin.Context) {
useChannel = append(useChannel, fmt.Sprintf("%d", channelId))
c.Set("use_channel", useChannel)
common.LogInfo(c, fmt.Sprintf("using channel #%d to retry (remain times %d)", channel.Id, i))
middleware.SetupContextForSelectedChannel(c, channel, originalModel)
//middleware.SetupContextForSelectedChannel(c, channel, originalModel)
requestBody, err := common.GetRequestBody(c)
c.Request.Body = io.NopCloser(bytes.NewBuffer(requestBody))


@@ -74,8 +74,8 @@ func UpdateTaskByPlatform(platform constant.TaskPlatform, taskChannelM map[int][
//_ = UpdateMidjourneyTaskAll(context.Background(), tasks)
case constant.TaskPlatformSuno:
_ = UpdateSunoTaskAll(context.Background(), taskChannelM, taskM)
case constant.TaskPlatformKling:
_ = UpdateVideoTaskAll(context.Background(), taskChannelM, taskM)
case constant.TaskPlatformKling, constant.TaskPlatformJimeng:
_ = UpdateVideoTaskAll(context.Background(), platform, taskChannelM, taskM)
default:
common.SysLog("未知平台")
}


@@ -2,27 +2,26 @@ package controller
import (
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"one-api/common"
"one-api/constant"
"one-api/model"
"one-api/relay"
"one-api/relay/channel"
"time"
)
func UpdateVideoTaskAll(ctx context.Context, taskChannelM map[int][]string, taskM map[string]*model.Task) error {
func UpdateVideoTaskAll(ctx context.Context, platform constant.TaskPlatform, taskChannelM map[int][]string, taskM map[string]*model.Task) error {
for channelId, taskIds := range taskChannelM {
if err := updateVideoTaskAll(ctx, channelId, taskIds, taskM); err != nil {
if err := updateVideoTaskAll(ctx, platform, channelId, taskIds, taskM); err != nil {
common.LogError(ctx, fmt.Sprintf("Channel #%d failed to update video async tasks: %s", channelId, err.Error()))
}
}
return nil
}
func updateVideoTaskAll(ctx context.Context, channelId int, taskIds []string, taskM map[string]*model.Task) error {
func updateVideoTaskAll(ctx context.Context, platform constant.TaskPlatform, channelId int, taskIds []string, taskM map[string]*model.Task) error {
common.LogInfo(ctx, fmt.Sprintf("Channel #%d pending video tasks: %d", channelId, len(taskIds)))
if len(taskIds) == 0 {
return nil
@@ -39,7 +38,7 @@ func updateVideoTaskAll(ctx context.Context, channelId int, taskIds []string, ta
}
return fmt.Errorf("CacheGetChannel failed: %w", err)
}
adaptor := relay.GetTaskAdaptor(constant.TaskPlatformKling)
adaptor := relay.GetTaskAdaptor(platform)
if adaptor == nil {
return fmt.Errorf("video adaptor not found")
}
@@ -52,74 +51,68 @@ func updateVideoTaskAll(ctx context.Context, channelId int, taskIds []string, ta
}
func updateVideoSingleTask(ctx context.Context, adaptor channel.TaskAdaptor, channel *model.Channel, taskId string, taskM map[string]*model.Task) error {
baseURL := common.ChannelBaseURLs[channel.Type]
baseURL := constant.ChannelBaseURLs[channel.Type]
if channel.GetBaseURL() != "" {
baseURL = channel.GetBaseURL()
}
resp, err := adaptor.FetchTask(baseURL, channel.Key, map[string]any{
"task_id": taskId,
})
if err != nil {
return fmt.Errorf("FetchTask failed for task %s: %w", taskId, err)
}
if resp.StatusCode != http.StatusOK {
return fmt.Errorf("Get Video Task status code: %d", resp.StatusCode)
}
defer resp.Body.Close()
responseBody, err := io.ReadAll(resp.Body)
if err != nil {
return fmt.Errorf("ReadAll failed for task %s: %w", taskId, err)
}
var responseItem map[string]interface{}
err = json.Unmarshal(responseBody, &responseItem)
if err != nil {
common.LogError(ctx, fmt.Sprintf("Failed to parse video task response body: %v, body: %s", err, string(responseBody)))
return fmt.Errorf("Unmarshal failed for task %s: %w", taskId, err)
}
code, _ := responseItem["code"].(float64)
if code != 0 {
return fmt.Errorf("video task fetch failed for task %s", taskId)
}
data, ok := responseItem["data"].(map[string]interface{})
if !ok {
common.LogError(ctx, fmt.Sprintf("Video task data format error: %s", string(responseBody)))
return fmt.Errorf("video task data format error for task %s", taskId)
}
task := taskM[taskId]
if task == nil {
common.LogError(ctx, fmt.Sprintf("Task %s not found in taskM", taskId))
return fmt.Errorf("task %s not found", taskId)
}
if status, ok := data["task_status"].(string); ok {
switch status {
case "submitted", "queued":
task.Status = model.TaskStatusSubmitted
case "processing":
task.Status = model.TaskStatusInProgress
case "succeed":
task.Status = model.TaskStatusSuccess
task.Progress = "100%"
if url, err := adaptor.ParseResultUrl(responseItem); err == nil {
task.FailReason = url
} else {
common.LogWarn(ctx, fmt.Sprintf("Failed to get url from body for task %s: %s", task.TaskID, err.Error()))
}
case "failed":
task.Status = model.TaskStatusFailure
task.Progress = "100%"
if reason, ok := data["fail_reason"].(string); ok {
task.FailReason = reason
}
}
resp, err := adaptor.FetchTask(baseURL, channel.Key, map[string]any{
"task_id": taskId,
"action": task.Action,
})
if err != nil {
return fmt.Errorf("fetchTask failed for task %s: %w", taskId, err)
}
//if resp.StatusCode != http.StatusOK {
//return fmt.Errorf("get Video Task status code: %d", resp.StatusCode)
//}
defer resp.Body.Close()
responseBody, err := io.ReadAll(resp.Body)
if err != nil {
return fmt.Errorf("readAll failed for task %s: %w", taskId, err)
}
// If task failed, refund quota
if task.Status == model.TaskStatusFailure {
taskResult, err := adaptor.ParseTaskResult(responseBody)
if err != nil {
return fmt.Errorf("parseTaskResult failed for task %s: %w", taskId, err)
}
//if taskResult.Code != 0 {
// return fmt.Errorf("video task fetch failed for task %s", taskId)
//}
now := time.Now().Unix()
if taskResult.Status == "" {
return fmt.Errorf("task %s status is empty", taskId)
}
task.Status = model.TaskStatus(taskResult.Status)
switch taskResult.Status {
case model.TaskStatusSubmitted:
task.Progress = "10%"
case model.TaskStatusQueued:
task.Progress = "20%"
case model.TaskStatusInProgress:
task.Progress = "30%"
if task.StartTime == 0 {
task.StartTime = now
}
case model.TaskStatusSuccess:
task.Progress = "100%"
if task.FinishTime == 0 {
task.FinishTime = now
}
task.FailReason = taskResult.Url
case model.TaskStatusFailure:
task.Status = model.TaskStatusFailure
task.Progress = "100%"
if task.FinishTime == 0 {
task.FinishTime = now
}
task.FailReason = taskResult.Reason
common.LogInfo(ctx, fmt.Sprintf("Task %s failed: %s", task.TaskID, task.FailReason))
quota := task.Quota
if quota != 0 {
@@ -129,6 +122,11 @@ func updateVideoSingleTask(ctx context.Context, adaptor channel.TaskAdaptor, cha
logContent := fmt.Sprintf("Video async task failed %s, refund %s", task.TaskID, common.LogQuota(quota))
model.RecordLog(task.UserId, model.LogTypeSystem, logContent)
}
default:
return fmt.Errorf("unknown task status %s for task %s", taskResult.Status, taskId)
}
if taskResult.Progress != "" {
task.Progress = taskResult.Progress
}
task.Data = responseBody


@@ -6,6 +6,7 @@ import (
"net/http"
"net/url"
"one-api/common"
"one-api/dto"
"one-api/model"
"one-api/setting"
"strconv"
@@ -246,15 +247,15 @@ func Register(c *gin.Context) {
}
func GetAllUsers(c *gin.Context) {
p, _ := strconv.Atoi(c.Query("p"))
pageSize, _ := strconv.Atoi(c.Query("page_size"))
if p < 1 {
p = 1
pageInfo, err := common.GetPageQuery(c)
if err != nil {
c.JSON(http.StatusOK, gin.H{
"success": false,
"message": "parse page query failed",
})
return
}
if pageSize < 0 {
pageSize = common.ItemsPerPage
}
users, total, err := model.GetAllUsers((p-1)*pageSize, pageSize)
users, total, err := model.GetAllUsers(pageInfo)
if err != nil {
c.JSON(http.StatusOK, gin.H{
"success": false,
@@ -262,15 +263,13 @@ func GetAllUsers(c *gin.Context) {
})
return
}
pageInfo.SetTotal(int(total))
pageInfo.SetItems(users)
c.JSON(http.StatusOK, gin.H{
"success": true,
"message": "",
"data": gin.H{
"items": users,
"total": total,
"page": p,
"page_size": pageSize,
},
"data": pageInfo,
})
return
}
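common.GetPageQuery centralizes the p/page_size parsing that every list endpoint previously duplicated. Judging from its use here, PageInfo carries offset/limit helpers plus SetTotal/SetItems and serializes directly as the response data — a sketch of such a type (assumed, not the repo's exact definition):

```go
type PageInfo struct {
	Page     int `json:"page" form:"p"`
	PageSize int `json:"page_size" form:"page_size"`
	Total    int `json:"total"`
	Items    any `json:"items"`
}

func (p *PageInfo) Offset() int        { return (p.Page - 1) * p.PageSize }
func (p *PageInfo) Limit() int         { return p.PageSize }
func (p *PageInfo) SetTotal(total int) { p.Total = total }
func (p *PageInfo) SetItems(items any) { p.Items = items }
```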
@@ -489,7 +488,7 @@ func GetUserModels(c *gin.Context) {
groups := setting.GetUserUsableGroups(user.Group)
var models []string
for group := range groups {
for _, g := range model.GetGroupModels(group) {
for _, g := range model.GetGroupEnabledModels(group) {
if !common.StringsContains(models, g) {
models = append(models, g)
}
@@ -963,7 +962,7 @@ func UpdateUserSetting(c *gin.Context) {
}
// 验证预警类型
if req.QuotaWarningType != constant.NotifyTypeEmail && req.QuotaWarningType != constant.NotifyTypeWebhook {
if req.QuotaWarningType != dto.NotifyTypeEmail && req.QuotaWarningType != dto.NotifyTypeWebhook {
c.JSON(http.StatusOK, gin.H{
"success": false,
"message": "无效的预警类型",
@@ -981,7 +980,7 @@ func UpdateUserSetting(c *gin.Context) {
}
// 如果是webhook类型,验证webhook地址
if req.QuotaWarningType == constant.NotifyTypeWebhook {
if req.QuotaWarningType == dto.NotifyTypeWebhook {
if req.WebhookUrl == "" {
c.JSON(http.StatusOK, gin.H{
"success": false,
@@ -1000,7 +999,7 @@ func UpdateUserSetting(c *gin.Context) {
}
// 如果是邮件类型,验证邮箱地址
if req.QuotaWarningType == constant.NotifyTypeEmail && req.NotificationEmail != "" {
if req.QuotaWarningType == dto.NotifyTypeEmail && req.NotificationEmail != "" {
// 验证邮箱格式
if !strings.Contains(req.NotificationEmail, "@") {
c.JSON(http.StatusOK, gin.H{
@@ -1022,24 +1021,24 @@ func UpdateUserSetting(c *gin.Context) {
}
// 构建设置
settings := map[string]interface{}{
constant.UserSettingNotifyType: req.QuotaWarningType,
constant.UserSettingQuotaWarningThreshold: req.QuotaWarningThreshold,
"accept_unset_model_ratio_model": req.AcceptUnsetModelRatioModel,
constant.UserSettingRecordIpLog: req.RecordIpLog,
settings := dto.UserSetting{
NotifyType: req.QuotaWarningType,
QuotaWarningThreshold: req.QuotaWarningThreshold,
AcceptUnsetRatioModel: req.AcceptUnsetModelRatioModel,
RecordIpLog: req.RecordIpLog,
}
// 如果是webhook类型,添加webhook相关设置
if req.QuotaWarningType == constant.NotifyTypeWebhook {
settings[constant.UserSettingWebhookUrl] = req.WebhookUrl
if req.QuotaWarningType == dto.NotifyTypeWebhook {
settings.WebhookUrl = req.WebhookUrl
if req.WebhookSecret != "" {
settings[constant.UserSettingWebhookSecret] = req.WebhookSecret
settings.WebhookSecret = req.WebhookSecret
}
}
// 如果提供了通知邮箱,添加到设置中
if req.QuotaWarningType == constant.NotifyTypeEmail && req.NotificationEmail != "" {
settings[constant.UserSettingNotificationEmail] = req.NotificationEmail
if req.QuotaWarningType == dto.NotifyTypeEmail && req.NotificationEmail != "" {
settings.NotificationEmail = req.NotificationEmail
}
// 更新用户设置


@@ -16,7 +16,7 @@ services:
- REDIS_CONN_STRING=redis://redis
- TZ=Asia/Shanghai
- ERROR_LOG_ENABLED=true # whether to enable error logging
# - TIKTOKEN_CACHE_DIR=./tiktoken_cache # uncomment to use a tiktoken cache
# - STREAMING_TIMEOUT=120 # streaming no-response timeout in seconds, default 120; try a larger value if you see empty completions
# - SESSION_SECRET=random_string # set for multi-node deployments; you MUST change this random string!
# - NODE_TYPE=slave # Uncomment for slave node in multi-node deployment
# - SYNC_FREQUENCY=60 # Uncomment if regular database syncing is needed

dto/channel_settings.go (new file, 7 lines)

@@ -0,0 +1,7 @@
package dto
type ChannelSettings struct {
ForceFormat bool `json:"force_format,omitempty"`
ThinkingToContent bool `json:"thinking_to_content,omitempty"`
Proxy string `json:"proxy"`
}
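This typed struct replaces the loose string keys from the deleted constant/channel_setting.go above. A channel's setting JSON would unmarshal like so (values illustrative; common.Unmarshal is the renamed JSON helper from this PR):

```go
var settings dto.ChannelSettings
raw := []byte(`{"force_format": true, "thinking_to_content": false, "proxy": "socks5://127.0.0.1:7890"}`)
if err := common.Unmarshal(raw, &settings); err != nil {
	// AddChannel/UpdateChannel reject such input with
	// "channel setting 格式错误:" + err.Error()
}
```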


@@ -3,6 +3,7 @@ package dto
import (
"encoding/json"
"one-api/common"
"one-api/types"
)
type ClaudeMetadata struct {
@@ -228,7 +229,7 @@ type ClaudeResponse struct {
Completion string `json:"completion,omitempty"`
StopReason string `json:"stop_reason,omitempty"`
Model string `json:"model,omitempty"`
Error *ClaudeError `json:"error,omitempty"`
Error *types.ClaudeError `json:"error,omitempty"`
Usage *ClaudeUsage `json:"usage,omitempty"`
Index *int `json:"index,omitempty"`
ContentBlock *ClaudeMediaMessage `json:"content_block,omitempty"`


@@ -1,5 +1,7 @@
package dto
import "one-api/types"
type OpenAIError struct {
Message string `json:"message"`
Type string `json:"type"`
@@ -14,11 +16,11 @@ type OpenAIErrorWithStatusCode struct {
}
type GeneralErrorResponse struct {
Error OpenAIError `json:"error"`
Message string `json:"message"`
Msg string `json:"msg"`
Err string `json:"err"`
ErrorMsg string `json:"error_msg"`
Error types.OpenAIError `json:"error"`
Message string `json:"message"`
Msg string `json:"msg"`
Err string `json:"err"`
ErrorMsg string `json:"error_msg"`
Header struct {
Message string `json:"message"`
} `json:"header"`


@@ -57,6 +57,8 @@ type MidjourneyDto struct {
StartTime int64 `json:"startTime"`
FinishTime int64 `json:"finishTime"`
ImageUrl string `json:"imageUrl"`
VideoUrl string `json:"videoUrl"`
VideoUrls []ImgUrls `json:"videoUrls"`
Status string `json:"status"`
Progress string `json:"progress"`
FailReason string `json:"failReason"`
@@ -65,6 +67,10 @@ type MidjourneyDto struct {
Properties *Properties `json:"properties"`
}
type ImgUrls struct {
Url string `json:"url"`
}
type MidjourneyStatus struct {
Status int `json:"status"`
}


@@ -65,8 +65,8 @@ type GeneralOpenAIRequest struct {
func (r *GeneralOpenAIRequest) ToMap() map[string]any {
result := make(map[string]any)
data, _ := common.EncodeJson(r)
_ = common.DecodeJson(data, &result)
data, _ := common.Marshal(r)
_ = common.Unmarshal(data, &result)
return result
}
@@ -646,4 +646,6 @@ type ResponsesToolsCall struct {
Name string `json:"name,omitempty"`
Description string `json:"description,omitempty"`
Parameters json.RawMessage `json:"parameters,omitempty"`
Function json.RawMessage `json:"function,omitempty"`
Container json.RawMessage `json:"container,omitempty"`
}


@@ -1,6 +1,9 @@
package dto
import "encoding/json"
import (
"encoding/json"
"one-api/types"
)
type SimpleResponse struct {
Usage `json:"usage"`
@@ -28,7 +31,7 @@ type OpenAITextResponse struct {
Object string `json:"object"`
Created any `json:"created"`
Choices []OpenAITextResponseChoice `json:"choices"`
Error *OpenAIError `json:"error,omitempty"`
Error *types.OpenAIError `json:"error,omitempty"`
Usage `json:"usage"`
}
@@ -201,7 +204,7 @@ type OpenAIResponsesResponse struct {
Object string `json:"object"`
CreatedAt int `json:"created_at"`
Status string `json:"status"`
Error *OpenAIError `json:"error,omitempty"`
Error *types.OpenAIError `json:"error,omitempty"`
IncompleteDetails *IncompleteDetails `json:"incomplete_details,omitempty"`
Instructions string `json:"instructions"`
MaxOutputTokens int `json:"max_output_tokens"`


@@ -1,26 +1,11 @@
package dto
type OpenAIModelPermission struct {
Id string `json:"id"`
Object string `json:"object"`
Created int `json:"created"`
AllowCreateEngine bool `json:"allow_create_engine"`
AllowSampling bool `json:"allow_sampling"`
AllowLogprobs bool `json:"allow_logprobs"`
AllowSearchIndices bool `json:"allow_search_indices"`
AllowView bool `json:"allow_view"`
AllowFineTuning bool `json:"allow_fine_tuning"`
Organization string `json:"organization"`
Group *string `json:"group"`
IsBlocking bool `json:"is_blocking"`
}
import "one-api/constant"
type OpenAIModels struct {
Id string `json:"id"`
Object string `json:"object"`
Created int `json:"created"`
OwnedBy string `json:"owned_by"`
Permission []OpenAIModelPermission `json:"permission"`
Root string `json:"root"`
Parent *string `json:"parent"`
Id string `json:"id"`
Object string `json:"object"`
Created int `json:"created"`
OwnedBy string `json:"owned_by"`
SupportedEndpointTypes []constant.EndpointType `json:"supported_endpoint_types"`
}


@@ -1,5 +1,7 @@
package dto
import "one-api/types"
const (
RealtimeEventTypeError = "error"
RealtimeEventTypeSessionUpdate = "session.update"
@@ -23,12 +25,12 @@ type RealtimeEvent struct {
EventId string `json:"event_id"`
Type string `json:"type"`
//PreviousItemId string `json:"previous_item_id"`
Session *RealtimeSession `json:"session,omitempty"`
Item *RealtimeItem `json:"item,omitempty"`
Error *OpenAIError `json:"error,omitempty"`
Response *RealtimeResponse `json:"response,omitempty"`
Delta string `json:"delta,omitempty"`
Audio string `json:"audio,omitempty"`
Session *RealtimeSession `json:"session,omitempty"`
Item *RealtimeItem `json:"item,omitempty"`
Error *types.OpenAIError `json:"error,omitempty"`
Response *RealtimeResponse `json:"response,omitempty"`
Delta string `json:"delta,omitempty"`
Audio string `json:"audio,omitempty"`
}
type RealtimeResponse struct {


@@ -4,7 +4,7 @@ type RerankRequest struct {
Documents []any `json:"documents"`
Query string `json:"query"`
Model string `json:"model"`
TopN int `json:"top_n"`
TopN int `json:"top_n,omitempty"`
ReturnDocuments *bool `json:"return_documents,omitempty"`
MaxChunkPerDoc int `json:"max_chunk_per_doc,omitempty"`
OverLapTokens int `json:"overlap_tokens,omitempty"`

dto/user_settings.go (new file, 16 lines)

@@ -0,0 +1,16 @@
package dto
type UserSetting struct {
NotifyType string `json:"notify_type,omitempty"` // QuotaWarningType: quota warning notification type
QuotaWarningThreshold float64 `json:"quota_warning_threshold,omitempty"` // QuotaWarningThreshold: quota warning threshold
WebhookUrl string `json:"webhook_url,omitempty"` // WebhookUrl: webhook URL
WebhookSecret string `json:"webhook_secret,omitempty"` // WebhookSecret: webhook secret
NotificationEmail string `json:"notification_email,omitempty"` // NotificationEmail: notification email address
AcceptUnsetRatioModel bool `json:"accept_unset_model_ratio_model,omitempty"` // AcceptUnsetRatioModel: whether to accept models without a configured price
RecordIpLog bool `json:"record_ip_log,omitempty"` // whether to record the client IP in request and error logs
}
var (
NotifyTypeEmail = "email" // Email
NotifyTypeWebhook = "webhook" // Webhook
)
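A minimal sketch (not part of the diff) showing why the typed dto.UserSetting replaces the old map[string]interface{}: field names are checked at compile time, and unset fields are dropped by omitempty so the stored JSON stays compact. The threshold and URL are illustrative:

package main

import (
    "encoding/json"
    "fmt"
)

// Local mirror of part of dto.UserSetting for this sketch.
type UserSetting struct {
    NotifyType            string  `json:"notify_type,omitempty"`
    QuotaWarningThreshold float64 `json:"quota_warning_threshold,omitempty"`
    WebhookUrl            string  `json:"webhook_url,omitempty"`
}

func main() {
    s := UserSetting{NotifyType: "webhook", QuotaWarningThreshold: 100, WebhookUrl: "https://example.com/hook"}
    b, _ := json.Marshal(s)
    fmt.Println(string(b)) // {"notify_type":"webhook","quota_warning_threshold":100,"webhook_url":"https://example.com/hook"}
}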

i18n/zh-cn.json (new file, 1041 lines; diff omitted due to size)

main.go (90 lines changed)

@@ -32,14 +32,13 @@ var buildFS embed.FS
var indexPage []byte
func main() {
err := godotenv.Load(".env")
err := InitResources()
if err != nil {
common.SysLog("Support for .env file is disabled: " + err.Error())
common.FatalLog("failed to initialize resources: " + err.Error())
return
}
common.LoadEnv()
common.SetupLogger()
common.SysLog("New API " + common.Version + " started")
if os.Getenv("GIN_MODE") != "debug" {
gin.SetMode(gin.ReleaseMode)
@@ -47,19 +46,7 @@ func main() {
if common.DebugEnabled {
common.SysLog("running in debug mode")
}
// Initialize SQL Database
err = model.InitDB()
if err != nil {
common.FatalLog("failed to initialize database: " + err.Error())
}
model.CheckSetup()
// Initialize SQL Database
err = model.InitLogDB()
if err != nil {
common.FatalLog("failed to initialize database: " + err.Error())
}
defer func() {
err := model.CloseDB()
if err != nil {
@@ -67,21 +54,6 @@ func main() {
}
}()
// Initialize Redis
err = common.InitRedisClient()
if err != nil {
common.FatalLog("failed to initialize Redis: " + err.Error())
}
// Initialize model settings
ratio_setting.InitRatioSettings()
// Initialize constants
constant.InitEnv()
// Initialize options
model.InitOptionMap()
service.InitTokenEncoders()
if common.RedisEnabled {
// for compatibility with old versions
common.MemoryCacheEnabled = true
@@ -96,9 +68,9 @@ func main() {
if r := recover(); r != nil {
common.SysError(fmt.Sprintf("InitChannelCache panic: %v, retrying once", r))
// Retry once
_, fixErr := model.FixAbility()
_, _, fixErr := model.FixAbility()
if fixErr != nil {
common.SysError(fmt.Sprintf("InitChannelCache failed: %s", fixErr.Error()))
common.FatalLog(fmt.Sprintf("InitChannelCache failed: %s", fixErr.Error()))
}
}
}()
@@ -186,3 +158,53 @@ func main() {
common.FatalLog("failed to start HTTP server: " + err.Error())
}
}
func InitResources() error {
// InitResources brings up logging, env loading, settings, the databases and Redis in dependency order.
err := godotenv.Load(".env")
if err != nil {
common.SysLog("未找到 .env 文件,使用默认环境变量,如果需要,请创建 .env 文件并设置相关变量")
common.SysLog("No .env file found, using default environment variables. If needed, please create a .env file and set the relevant variables.")
}
common.SetupLogger()
// Load environment variables
common.InitEnv()
// Initialize model settings
ratio_setting.InitRatioSettings()
service.InitHttpClient()
service.InitTokenEncoders()
// Initialize SQL Database
err = model.InitDB()
if err != nil {
common.FatalLog("failed to initialize database: " + err.Error())
return err
}
model.CheckSetup()
// Initialize options, should after model.InitDB()
model.InitOptionMap()
// Initialize models
model.GetPricing()
// Initialize SQL Database
err = model.InitLogDB()
if err != nil {
return err
}
// Initialize Redis
err = common.InitRedisClient()
if err != nil {
return err
}
return nil
}


@@ -1,6 +1,7 @@
package middleware
import (
"fmt"
"net/http"
"one-api/common"
"one-api/model"
@@ -184,7 +185,7 @@ func TokenAuth() func(c *gin.Context) {
}
}
// Gemini API: read the key from the query string
if strings.HasPrefix(c.Request.URL.Path, "/v1beta/models/") {
if strings.HasPrefix(c.Request.URL.Path, "/v1beta/models/") || strings.HasPrefix(c.Request.URL.Path, "/v1/models/") {
skKey := c.Query("key")
if skKey != "" {
c.Request.Header.Set("Authorization", "Bearer "+skKey)
@@ -233,30 +234,41 @@ func TokenAuth() func(c *gin.Context) {
userCache.WriteContext(c)
c.Set("id", token.UserId)
c.Set("token_id", token.Id)
c.Set("token_key", token.Key)
c.Set("token_name", token.Name)
c.Set("token_unlimited_quota", token.UnlimitedQuota)
if !token.UnlimitedQuota {
c.Set("token_quota", token.RemainQuota)
}
if token.ModelLimitsEnabled {
c.Set("token_model_limit_enabled", true)
c.Set("token_model_limit", token.GetModelLimitsMap())
} else {
c.Set("token_model_limit_enabled", false)
}
c.Set("allow_ips", token.GetIpLimitsMap())
c.Set("token_group", token.Group)
if len(parts) > 1 {
if model.IsAdmin(token.UserId) {
c.Set("specific_channel_id", parts[1])
} else {
abortWithOpenAiMessage(c, http.StatusForbidden, "普通用户不支持指定渠道")
return
}
err = SetupContextForToken(c, token, parts...)
if err != nil {
return
}
c.Next()
}
}
func SetupContextForToken(c *gin.Context, token *model.Token, parts ...string) error {
if token == nil {
return fmt.Errorf("token is nil")
}
c.Set("id", token.UserId)
c.Set("token_id", token.Id)
c.Set("token_key", token.Key)
c.Set("token_name", token.Name)
c.Set("token_unlimited_quota", token.UnlimitedQuota)
if !token.UnlimitedQuota {
c.Set("token_quota", token.RemainQuota)
}
if token.ModelLimitsEnabled {
c.Set("token_model_limit_enabled", true)
c.Set("token_model_limit", token.GetModelLimitsMap())
} else {
c.Set("token_model_limit_enabled", false)
}
c.Set("allow_ips", token.GetIpLimitsMap())
c.Set("token_group", token.Group)
if len(parts) > 1 {
if model.IsAdmin(token.UserId) {
c.Set("specific_channel_id", parts[1])
} else {
abortWithOpenAiMessage(c, http.StatusForbidden, "普通用户不支持指定渠道")
return fmt.Errorf("普通用户不支持指定渠道")
}
}
return nil
}
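A hypothetical unit-test sketch (not in the diff) for the extracted SetupContextForToken; it assumes the one-api middleware and model packages and exercises only the happy path:

package middleware

import (
    "net/http/httptest"
    "testing"

    "one-api/model"

    "github.com/gin-gonic/gin"
)

func TestSetupContextForToken(t *testing.T) {
    c, _ := gin.CreateTestContext(httptest.NewRecorder())
    tok := &model.Token{UserId: 1, Id: 2, Key: "sk-test", UnlimitedQuota: true}
    if err := SetupContextForToken(c, tok); err != nil {
        t.Fatal(err)
    }
    if c.GetInt("id") != 1 {
        t.Fatalf("expected user id 1, got %d", c.GetInt("id"))
    }
}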


@@ -21,11 +21,12 @@ import (
type ModelRequest struct {
Model string `json:"model"`
Group string `json:"group,omitempty"`
}
func Distribute() func(c *gin.Context) {
return func(c *gin.Context) {
allowIpsMap := c.GetStringMap("allow_ips")
allowIpsMap := common.GetContextKeyStringMap(c, constant.ContextKeyTokenAllowIps)
if len(allowIpsMap) != 0 {
clientIp := c.ClientIP()
if _, ok := allowIpsMap[clientIp]; !ok {
@@ -34,14 +35,14 @@ func Distribute() func(c *gin.Context) {
}
}
var channel *model.Channel
channelId, ok := c.Get("specific_channel_id")
channelId, ok := common.GetContextKey(c, constant.ContextKeyTokenSpecificChannelId)
modelRequest, shouldSelectChannel, err := getModelRequest(c)
if err != nil {
abortWithOpenAiMessage(c, http.StatusBadRequest, "Invalid request, "+err.Error())
return
}
userGroup := c.GetString(constant.ContextKeyUserGroup)
tokenGroup := c.GetString("token_group")
userGroup := common.GetContextKeyString(c, constant.ContextKeyUserGroup)
tokenGroup := common.GetContextKeyString(c, constant.ContextKeyTokenGroup)
if tokenGroup != "" {
// check common.UserUsableGroups[userGroup]
if _, ok := setting.GetUserUsableGroups(userGroup)[tokenGroup]; !ok {
@@ -57,7 +58,7 @@ func Distribute() func(c *gin.Context) {
}
userGroup = tokenGroup
}
c.Set(constant.ContextKeyUsingGroup, userGroup)
common.SetContextKey(c, constant.ContextKeyUsingGroup, userGroup)
if ok {
id, err := strconv.Atoi(channelId.(string))
if err != nil {
@@ -76,9 +77,9 @@ func Distribute() func(c *gin.Context) {
} else {
// Select a channel for the user
// check token model mapping
modelLimitEnable := c.GetBool("token_model_limit_enabled")
modelLimitEnable := common.GetContextKeyBool(c, constant.ContextKeyTokenModelLimitEnabled)
if modelLimitEnable {
s, ok := c.Get("token_model_limit")
s, ok := common.GetContextKey(c, constant.ContextKeyTokenModelLimit)
var tokenModelLimit map[string]bool
if ok {
tokenModelLimit = s.(map[string]bool)
@@ -121,7 +122,7 @@ func Distribute() func(c *gin.Context) {
}
}
}
c.Set(constant.ContextKeyRequestStartTime, time.Now())
common.SetContextKey(c, constant.ContextKeyRequestStartTime, time.Now())
SetupContextForSelectedChannel(c, channel, modelRequest.Model)
c.Next()
}
@@ -171,15 +172,25 @@ func getModelRequest(c *gin.Context) (*ModelRequest, bool, error) {
c.Set("platform", string(constant.TaskPlatformSuno))
c.Set("relay_mode", relayMode)
} else if strings.Contains(c.Request.URL.Path, "/v1/video/generations") {
relayMode := relayconstant.Path2RelayKling(c.Request.Method, c.Request.URL.Path)
if relayMode == relayconstant.RelayModeKlingFetchByID {
shouldSelectChannel = false
err = common.UnmarshalBodyReusable(c, &modelRequest)
var platform string
var relayMode int
if strings.HasPrefix(modelRequest.Model, "jimeng") {
platform = string(constant.TaskPlatformJimeng)
relayMode = relayconstant.Path2RelayJimeng(c.Request.Method, c.Request.URL.Path)
if relayMode == relayconstant.RelayModeJimengFetchByID {
shouldSelectChannel = false
}
} else {
err = common.UnmarshalBodyReusable(c, &modelRequest)
platform = string(constant.TaskPlatformKling)
relayMode = relayconstant.Path2RelayKling(c.Request.Method, c.Request.URL.Path)
if relayMode == relayconstant.RelayModeKlingFetchByID {
shouldSelectChannel = false
}
}
c.Set("platform", string(constant.TaskPlatformKling))
c.Set("platform", platform)
c.Set("relay_mode", relayMode)
} else if strings.HasPrefix(c.Request.URL.Path, "/v1beta/models/") {
} else if strings.HasPrefix(c.Request.URL.Path, "/v1beta/models/") || strings.HasPrefix(c.Request.URL.Path, "/v1/models/") {
// Gemini API path handling: /v1beta/models/gemini-2.0-flash:generateContent
relayMode := relayconstant.RelayModeGemini
modelName := extractModelNameFromGeminiPath(c.Request.URL.Path)
@@ -227,6 +238,14 @@ func getModelRequest(c *gin.Context) (*ModelRequest, bool, error) {
}
c.Set("relay_mode", relayMode)
}
if strings.HasPrefix(c.Request.URL.Path, "/pg/chat/completions") {
// playground chat completions
err = common.UnmarshalBodyReusable(c, &modelRequest)
if err != nil {
return nil, false, errors.New("无效的请求, " + err.Error())
}
common.SetContextKey(c, constant.ContextKeyTokenGroup, modelRequest.Group)
}
return &modelRequest, shouldSelectChannel, nil
}
@@ -235,37 +254,42 @@ func SetupContextForSelectedChannel(c *gin.Context, channel *model.Channel, mode
if channel == nil {
return
}
c.Set("channel_id", channel.Id)
c.Set("channel_name", channel.Name)
c.Set("channel_type", channel.Type)
c.Set("channel_create_time", channel.CreatedTime)
c.Set("channel_setting", channel.GetSetting())
c.Set("param_override", channel.GetParamOverride())
if nil != channel.OpenAIOrganization && "" != *channel.OpenAIOrganization {
c.Set("channel_organization", *channel.OpenAIOrganization)
common.SetContextKey(c, constant.ContextKeyChannelId, channel.Id)
common.SetContextKey(c, constant.ContextKeyChannelName, channel.Name)
common.SetContextKey(c, constant.ContextKeyChannelType, channel.Type)
common.SetContextKey(c, constant.ContextKeyChannelCreateTime, channel.CreatedTime)
common.SetContextKey(c, constant.ContextKeyChannelSetting, channel.GetSetting())
common.SetContextKey(c, constant.ContextKeyChannelParamOverride, channel.GetParamOverride())
if nil != channel.OpenAIOrganization && *channel.OpenAIOrganization != "" {
common.SetContextKey(c, constant.ContextKeyChannelOrganization, *channel.OpenAIOrganization)
}
common.SetContextKey(c, constant.ContextKeyChannelAutoBan, channel.GetAutoBan())
common.SetContextKey(c, constant.ContextKeyChannelModelMapping, channel.GetModelMapping())
common.SetContextKey(c, constant.ContextKeyChannelStatusCodeMapping, channel.GetStatusCodeMapping())
if channel.ChannelInfo.IsMultiKey {
common.SetContextKey(c, constant.ContextKeyChannelIsMultiKey, true)
}
c.Set("auto_ban", channel.GetAutoBan())
c.Set("model_mapping", channel.GetModelMapping())
c.Set("status_code_mapping", channel.GetStatusCodeMapping())
c.Request.Header.Set("Authorization", fmt.Sprintf("Bearer %s", channel.Key))
c.Set("base_url", channel.GetBaseURL())
common.SetContextKey(c, constant.ContextKeyChannelBaseUrl, channel.GetBaseURL())
// TODO: unify api_version handling
switch channel.Type {
case common.ChannelTypeAzure:
case constant.ChannelTypeAzure:
c.Set("api_version", channel.Other)
case common.ChannelTypeVertexAi:
case constant.ChannelTypeVertexAi:
c.Set("region", channel.Other)
case common.ChannelTypeXunfei:
case constant.ChannelTypeXunfei:
c.Set("api_version", channel.Other)
case common.ChannelTypeGemini:
case constant.ChannelTypeGemini:
c.Set("api_version", channel.Other)
case common.ChannelTypeAli:
case constant.ChannelTypeAli:
c.Set("plugin", channel.Other)
case common.ChannelCloudflare:
case constant.ChannelCloudflare:
c.Set("api_version", channel.Other)
case common.ChannelTypeMokaAI:
case constant.ChannelTypeMokaAI:
c.Set("api_version", channel.Other)
case common.ChannelTypeCoze:
case constant.ChannelTypeCoze:
c.Set("bot_id", channel.Other)
}
}
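The distributor now reads and writes request-scoped values through typed helpers instead of raw string keys. A minimal sketch of that pattern follows; the real common.SetContextKey / common.GetContextKeyString implementations are not shown in this diff, so these are assumed equivalents:

package main

import (
    "fmt"
    "net/http/httptest"

    "github.com/gin-gonic/gin"
)

type ContextKey string

const ContextKeyUsingGroup ContextKey = "using_group"

// Assumed equivalents of common.SetContextKey / common.GetContextKeyString:
// a typed key prevents typos that a bare string key would let through.
func SetContextKey(c *gin.Context, key ContextKey, value any) { c.Set(string(key), value) }

func GetContextKeyString(c *gin.Context, key ContextKey) string { return c.GetString(string(key)) }

func main() {
    c, _ := gin.CreateTestContext(httptest.NewRecorder())
    SetContextKey(c, ContextKeyUsingGroup, "default")
    fmt.Println(GetContextKeyString(c, ContextKeyUsingGroup)) // default
}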


@@ -0,0 +1,47 @@
package middleware
import (
"bytes"
"encoding/json"
"io"
"one-api/common"
"one-api/constant"
"github.com/gin-gonic/gin"
)
func KlingRequestConvert() func(c *gin.Context) {
return func(c *gin.Context) {
var originalReq map[string]interface{}
if err := common.UnmarshalBodyReusable(c, &originalReq); err != nil {
c.Next()
return
}
model, _ := originalReq["model"].(string)
prompt, _ := originalReq["prompt"].(string)
unifiedReq := map[string]interface{}{
"model": model,
"prompt": prompt,
"metadata": originalReq,
}
jsonData, err := json.Marshal(unifiedReq)
if err != nil {
c.Next()
return
}
// Rewrite request body and path
c.Request.Body = io.NopCloser(bytes.NewBuffer(jsonData))
c.Request.URL.Path = "/v1/video/generations"
if image, _ := originalReq["image"].(string); image == "" { // a missing or empty image implies text generation
c.Set("action", constant.TaskActionTextGenerate)
}
// We have to reset the request body for the next handlers
c.Set(common.KeyRequestBody, jsonData)
c.Next()
}
}
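A hypothetical wiring sketch (not in the diff); the route group and port are assumptions, but the ordering matters: KlingRequestConvert must run before Distribute so the distributor sees the rewritten body and the unified /v1/video/generations path:

package main

import (
    "one-api/middleware"

    "github.com/gin-gonic/gin"
)

func main() {
    r := gin.New()
    // Hypothetical route group; only the middleware order is the point here.
    kling := r.Group("/kling")
    kling.Use(middleware.KlingRequestConvert(), middleware.Distribute())
    _ = r.Run(":3000")
}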


@@ -177,9 +177,9 @@ func ModelRequestRateLimit() func(c *gin.Context) {
successMaxCount := setting.ModelRequestRateLimitSuccessCount
// Get the group
group := c.GetString("token_group")
group := common.GetContextKeyString(c, constant.ContextKeyTokenGroup)
if group == "" {
group = c.GetString(constant.ContextKeyUserGroup)
group = common.GetContextKeyString(c, constant.ContextKeyUserGroup)
}
// Get the group's rate-limit configuration


@@ -5,6 +5,7 @@ import (
"fmt"
"one-api/common"
"strings"
"sync"
"github.com/samber/lo"
"gorm.io/gorm"
@@ -21,7 +22,22 @@ type Ability struct {
Tag *string `json:"tag" gorm:"index"`
}
func GetGroupModels(group string) []string {
type AbilityWithChannel struct {
Ability
ChannelType int `json:"channel_type"`
}
func GetAllEnableAbilityWithChannels() ([]AbilityWithChannel, error) {
var abilities []AbilityWithChannel
err := DB.Table("abilities").
Select("abilities.*, channels.type as channel_type").
Joins("left join channels on abilities.channel_id = channels.id").
Where("abilities.enabled = ?", true).
Scan(&abilities).Error
return abilities, err
}
func GetGroupEnabledModels(group string) []string {
var models []string
// Find distinct models
DB.Table("abilities").Where(commonGroupCol+" = ? and enabled = ?", group, true).Distinct("model").Pluck("model", &models)
@@ -46,7 +62,7 @@ func getPriority(group string, model string, retry int) (int, error) {
var priorities []int
err := DB.Model(&Ability{}).
Select("DISTINCT(priority)").
Where(commonGroupCol+" = ? and model = ? and enabled = ?", group, model, commonTrueVal).
Where(commonGroupCol+" = ? and model = ? and enabled = ?", group, model, true).
Order("priority DESC"). // 按优先级降序排序
Pluck("priority", &priorities).Error // Pluck用于将查询的结果直接扫描到一个切片中
@@ -72,14 +88,14 @@ func getPriority(group string, model string, retry int) (int, error) {
}
func getChannelQuery(group string, model string, retry int) *gorm.DB {
maxPrioritySubQuery := DB.Model(&Ability{}).Select("MAX(priority)").Where(commonGroupCol+" = ? and model = ? and enabled = ?", group, model, commonTrueVal)
channelQuery := DB.Where(commonGroupCol+" = ? and model = ? and enabled = ? and priority = (?)", group, model, commonTrueVal, maxPrioritySubQuery)
maxPrioritySubQuery := DB.Model(&Ability{}).Select("MAX(priority)").Where(commonGroupCol+" = ? and model = ? and enabled = ?", group, model, true)
channelQuery := DB.Where(commonGroupCol+" = ? and model = ? and enabled = ? and priority = (?)", group, model, true, maxPrioritySubQuery)
if retry != 0 {
priority, err := getPriority(group, model, retry)
if err != nil {
common.SysError(fmt.Sprintf("Get priority failed: %s", err.Error()))
} else {
channelQuery = DB.Where(commonGroupCol+" = ? and model = ? and enabled = ? and priority = ?", group, model, commonTrueVal, priority)
channelQuery = DB.Where(commonGroupCol+" = ? and model = ? and enabled = ? and priority = ?", group, model, true, priority)
}
}
@@ -257,74 +273,45 @@ func UpdateAbilityByTag(tag string, newTag *string, priority *int64, weight *uin
return DB.Model(&Ability{}).Where("tag = ?", tag).Updates(ability).Error
}
func FixAbility() (int, error) {
var channelIds []int
count := 0
// Find all channel ids from channel table
err := DB.Model(&Channel{}).Pluck("id", &channelIds).Error
var fixLock = sync.Mutex{}
func FixAbility() (int, int, error) {
lock := fixLock.TryLock()
if !lock {
return 0, 0, errors.New("已经有一个修复任务在运行中,请稍后再试")
}
defer fixLock.Unlock()
var channels []*Channel
// Find all channels
err := DB.Model(&Channel{}).Find(&channels).Error
if err != nil {
common.SysError(fmt.Sprintf("Get channel ids from channel table failed: %s", err.Error()))
return 0, err
return 0, 0, err
}
// Delete abilities of channels that are not in channel table - in batches to avoid too many placeholders
if len(channelIds) > 0 {
// Process deletion in chunks to avoid "too many placeholders" error
for _, chunk := range lo.Chunk(channelIds, 100) {
err = DB.Where("channel_id NOT IN (?)", chunk).Delete(&Ability{}).Error
if err != nil {
common.SysError(fmt.Sprintf("Delete abilities of channels (batch) that are not in channel table failed: %s", err.Error()))
return 0, err
}
}
} else {
// If no channels exist, delete all abilities
err = DB.Delete(&Ability{}).Error
if len(channels) == 0 {
return 0, 0, nil
}
successCount := 0
failCount := 0
for _, chunk := range lo.Chunk(channels, 50) {
ids := lo.Map(chunk, func(c *Channel, _ int) int { return c.Id })
// Delete all abilities of this channel
err = DB.Where("channel_id IN ?", ids).Delete(&Ability{}).Error
if err != nil {
common.SysError(fmt.Sprintf("Delete all abilities failed: %s", err.Error()))
return 0, err
common.SysError(fmt.Sprintf("Delete abilities failed: %s", err.Error()))
failCount += len(chunk)
continue
}
common.SysLog("Delete all abilities successfully")
return 0, nil
}
common.SysLog(fmt.Sprintf("Delete abilities of channels that are not in channel table successfully, ids: %v", channelIds))
count += len(channelIds)
// Use channelIds to find channel not in abilities table
var abilityChannelIds []int
err = DB.Table("abilities").Distinct("channel_id").Pluck("channel_id", &abilityChannelIds).Error
if err != nil {
common.SysError(fmt.Sprintf("Get channel ids from abilities table failed: %s", err.Error()))
return count, err
}
var channels []Channel
if len(abilityChannelIds) == 0 {
err = DB.Find(&channels).Error
} else {
// Process query in chunks to avoid "too many placeholders" error
err = nil
for _, chunk := range lo.Chunk(abilityChannelIds, 100) {
var channelsChunk []Channel
err = DB.Where("id NOT IN (?)", chunk).Find(&channelsChunk).Error
// Then add new abilities
for _, channel := range chunk {
err = channel.AddAbilities()
if err != nil {
common.SysError(fmt.Sprintf("Find channels not in abilities table failed: %s", err.Error()))
return count, err
common.SysError(fmt.Sprintf("Add abilities for channel %d failed: %s", channel.Id, err.Error()))
failCount++
} else {
successCount++
}
channels = append(channels, channelsChunk...)
}
}
for _, channel := range channels {
err := channel.UpdateAbilities(nil)
if err != nil {
common.SysError(fmt.Sprintf("Update abilities of channel %d failed: %s", channel.Id, err.Error()))
} else {
common.SysLog(fmt.Sprintf("Update abilities of channel %d successfully", channel.Id))
count++
}
}
InitChannelCache()
return count, nil
return successCount, failCount, nil
}
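A minimal call-site sketch (not in the diff) for the new three-value FixAbility signature; the mutex means a second concurrent call fails fast instead of queueing:

package main

import (
    "log"

    "one-api/model"
)

func main() {
    success, fail, err := model.FixAbility()
    if err != nil {
        log.Println("fix aborted:", err) // e.g. another repair task already holds the lock
        return
    }
    log.Printf("abilities rebuilt: %d channels succeeded, %d failed", success, fail)
}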


@@ -1,8 +1,13 @@
package model
import (
"database/sql/driver"
"encoding/json"
"fmt"
"math/rand"
"one-api/common"
"one-api/constant"
"one-api/dto"
"strings"
"sync"
@@ -35,8 +40,100 @@ type Channel struct {
AutoBan *int `json:"auto_ban" gorm:"default:1"`
OtherInfo string `json:"other_info"`
Tag *string `json:"tag" gorm:"index"`
Setting *string `json:"setting" gorm:"type:text"`
Setting *string `json:"setting" gorm:"type:text"` // extra channel settings
ParamOverride *string `json:"param_override" gorm:"type:text"`
// add after v0.8.5
ChannelInfo ChannelInfo `json:"channel_info" gorm:"type:json"`
}
type ChannelInfo struct {
IsMultiKey bool `json:"is_multi_key"` // whether multi-key mode is enabled
MultiKeyStatusList map[int]int `json:"multi_key_status_list"` // key status list: key index -> status
MultiKeyPollingIndex int `json:"multi_key_polling_index"` // polling index for multi-key mode
MultiKeyMode constant.MultiKeyMode `json:"multi_key_mode"`
}
// Value implements driver.Valuer interface
func (c ChannelInfo) Value() (driver.Value, error) {
return common.Marshal(&c)
}
// Scan implements sql.Scanner interface
func (c *ChannelInfo) Scan(value interface{}) error {
bytesValue, _ := value.([]byte)
return common.Unmarshal(bytesValue, c)
}
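This pair is the standard database/sql hook: GORM calls Value when writing ChannelInfo into the JSON column and Scan when reading it back. Because Value has a value receiver, Go promotes the method to *ChannelInfo as well, so both forms satisfy driver.Valuer. A self-contained sketch of the round trip, with common.Marshal/Unmarshal swapped for encoding/json:

package main

import (
    "database/sql/driver"
    "encoding/json"
    "fmt"
)

type ChannelInfo struct {
    IsMultiKey bool `json:"is_multi_key"`
}

func (c ChannelInfo) Value() (driver.Value, error) { return json.Marshal(c) }

func (c *ChannelInfo) Scan(value interface{}) error {
    b, _ := value.([]byte)
    return json.Unmarshal(b, c)
}

func main() {
    var _ driver.Valuer = ChannelInfo{}  // value receiver: the value type qualifies
    var _ driver.Valuer = &ChannelInfo{} // and so does the pointer
    v, _ := ChannelInfo{IsMultiKey: true}.Value()
    var out ChannelInfo
    _ = out.Scan(v)
    fmt.Printf("%+v\n", out) // {IsMultiKey:true}
}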
func (channel *Channel) getKeys() []string {
if channel.Key == "" {
return []string{}
}
// use \n to split keys
keys := strings.Split(strings.Trim(channel.Key, "\n"), "\n")
return keys
}
func (channel *Channel) GetNextEnabledKey() (string, error) {
// If not in multi-key mode, return the original key string directly.
if !channel.ChannelInfo.IsMultiKey {
return channel.Key, nil
}
// Obtain all keys (split by \n)
keys := channel.getKeys()
if len(keys) == 0 {
// No keys available, return error, should disable the channel
return "", fmt.Errorf("no valid keys in channel")
}
statusList := channel.ChannelInfo.MultiKeyStatusList
// helper to get key status, default to enabled when missing
getStatus := func(idx int) int {
if statusList == nil {
return common.ChannelStatusEnabled
}
if status, ok := statusList[idx]; ok {
return status
}
return common.ChannelStatusEnabled
}
// Collect indexes of enabled keys
enabledIdx := make([]int, 0, len(keys))
for i := range keys {
if getStatus(i) == common.ChannelStatusEnabled {
enabledIdx = append(enabledIdx, i)
}
}
// If no specific status list or none enabled, fall back to first key
if len(enabledIdx) == 0 {
return keys[0], nil
}
switch channel.ChannelInfo.MultiKeyMode {
case constant.MultiKeyModeRandom:
// Randomly pick one enabled key
return keys[enabledIdx[rand.Intn(len(enabledIdx))]], nil
case constant.MultiKeyModePolling:
// Start from the saved polling index and look for the next enabled key
start := channel.ChannelInfo.MultiKeyPollingIndex
if start < 0 || start >= len(keys) {
start = 0
}
for i := 0; i < len(keys); i++ {
idx := (start + i) % len(keys)
if getStatus(idx) == common.ChannelStatusEnabled {
// update polling index for next call (point to the next position)
channel.ChannelInfo.MultiKeyPollingIndex = (idx + 1) % len(keys)
return keys[idx], nil
}
}
// Fallback should not happen, but return first enabled key
return keys[enabledIdx[0]], nil
default:
// Unknown mode, default to first enabled key (or original key string)
return keys[enabledIdx[0]], nil
}
}
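A minimal, dependency-free sketch (not in the diff) of the polling branch above: disabled keys are skipped, and the saved index always points one past the key that was just handed out, so healthy keys rotate evenly:

package main

import "fmt"

func nextKey(keys []string, disabled map[int]bool, start int) (string, int) {
    n := len(keys)
    for i := 0; i < n; i++ {
        idx := (start + i) % n
        if !disabled[idx] {
            return keys[idx], (idx + 1) % n // advance past the returned key
        }
    }
    return keys[0], 0 // fallback: nothing enabled
}

func main() {
    keys := []string{"k0", "k1", "k2"}
    disabled := map[int]bool{1: true}
    idx := 0
    for i := 0; i < 4; i++ {
        var k string
        k, idx = nextKey(keys, disabled, idx)
        fmt.Print(k, " ") // k0 k2 k0 k2 (k1 is skipped)
    }
}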
func (channel *Channel) GetModels() []string {
@@ -514,8 +611,19 @@ func SearchTags(keyword string, group string, model string, idSort bool) ([]*str
return tags, nil
}
func (channel *Channel) GetSetting() map[string]interface{} {
setting := make(map[string]interface{})
func (channel *Channel) ValidateSettings() error {
channelParams := &dto.ChannelSettings{}
if channel.Setting != nil && *channel.Setting != "" {
err := json.Unmarshal([]byte(*channel.Setting), channelParams)
if err != nil {
return err
}
}
return nil
}
func (channel *Channel) GetSetting() dto.ChannelSettings {
setting := dto.ChannelSettings{}
if channel.Setting != nil && *channel.Setting != "" {
err := json.Unmarshal([]byte(*channel.Setting), &setting)
if err != nil {
@@ -525,7 +633,7 @@ func (channel *Channel) GetSetting() map[string]interface{} {
return setting
}
func (channel *Channel) SetSetting(setting map[string]interface{}) {
func (channel *Channel) SetSetting(setting dto.ChannelSettings) {
settingBytes, err := json.Marshal(setting)
if err != nil {
common.SysError("failed to marshal setting: " + err.Error())


@@ -4,7 +4,6 @@ import (
"context"
"fmt"
"one-api/common"
"one-api/constant"
"os"
"strings"
"time"
@@ -50,7 +49,7 @@ func formatUserLogs(logs []*Log) {
for i := range logs {
logs[i].ChannelName = ""
var otherMap map[string]interface{}
otherMap = common.StrToMap(logs[i].Other)
otherMap, _ = common.StrToMap(logs[i].Other)
if otherMap != nil {
// delete admin
delete(otherMap, "admin_info")
@@ -100,10 +99,8 @@ func RecordErrorLog(c *gin.Context, userId int, channelId int, modelName string,
// Decide whether the client IP needs to be recorded
needRecordIp := false
if settingMap, err := GetUserSetting(userId, false); err == nil {
if v, ok := settingMap[constant.UserSettingRecordIpLog]; ok {
if vb, ok := v.(bool); ok && vb {
needRecordIp = true
}
if settingMap.RecordIpLog {
needRecordIp = true
}
}
log := &Log{
@@ -136,22 +133,34 @@ func RecordErrorLog(c *gin.Context, userId int, channelId int, modelName string,
}
}
func RecordConsumeLog(c *gin.Context, userId int, channelId int, promptTokens int, completionTokens int,
modelName string, tokenName string, quota int, content string, tokenId int, userQuota int, useTimeSeconds int,
isStream bool, group string, other map[string]interface{}) {
common.LogInfo(c, fmt.Sprintf("record consume log: userId=%d, 用户调用前余额=%d, channelId=%d, promptTokens=%d, completionTokens=%d, modelName=%s, tokenName=%s, quota=%d, content=%s", userId, userQuota, channelId, promptTokens, completionTokens, modelName, tokenName, quota, content))
type RecordConsumeLogParams struct {
ChannelId int `json:"channel_id"`
PromptTokens int `json:"prompt_tokens"`
CompletionTokens int `json:"completion_tokens"`
ModelName string `json:"model_name"`
TokenName string `json:"token_name"`
Quota int `json:"quota"`
Content string `json:"content"`
TokenId int `json:"token_id"`
UserQuota int `json:"user_quota"`
UseTimeSeconds int `json:"use_time_seconds"`
IsStream bool `json:"is_stream"`
Group string `json:"group"`
Other map[string]interface{} `json:"other"`
}
func RecordConsumeLog(c *gin.Context, userId int, params RecordConsumeLogParams) {
common.LogInfo(c, fmt.Sprintf("record consume log: userId=%d, params=%s", userId, common.GetJsonString(params)))
if !common.LogConsumeEnabled {
return
}
username := c.GetString("username")
otherStr := common.MapToJsonStr(other)
otherStr := common.MapToJsonStr(params.Other)
// Decide whether the client IP needs to be recorded
needRecordIp := false
if settingMap, err := GetUserSetting(userId, false); err == nil {
if v, ok := settingMap[constant.UserSettingRecordIpLog]; ok {
if vb, ok := v.(bool); ok && vb {
needRecordIp = true
}
if settingMap.RecordIpLog {
needRecordIp = true
}
}
log := &Log{
@@ -159,17 +168,17 @@ func RecordConsumeLog(c *gin.Context, userId int, channelId int, promptTokens in
Username: username,
CreatedAt: common.GetTimestamp(),
Type: LogTypeConsume,
Content: content,
PromptTokens: promptTokens,
CompletionTokens: completionTokens,
TokenName: tokenName,
ModelName: modelName,
Quota: quota,
ChannelId: channelId,
TokenId: tokenId,
UseTime: useTimeSeconds,
IsStream: isStream,
Group: group,
Content: params.Content,
PromptTokens: params.PromptTokens,
CompletionTokens: params.CompletionTokens,
TokenName: params.TokenName,
ModelName: params.ModelName,
Quota: params.Quota,
ChannelId: params.ChannelId,
TokenId: params.TokenId,
UseTime: params.UseTimeSeconds,
IsStream: params.IsStream,
Group: params.Group,
Ip: func() string {
if needRecordIp {
return c.ClientIP()
@@ -184,7 +193,7 @@ func RecordConsumeLog(c *gin.Context, userId int, channelId int, promptTokens in
}
if common.DataExportEnabled {
gopool.Go(func() {
LogQuotaData(userId, username, modelName, quota, common.GetTimestamp(), promptTokens+completionTokens)
LogQuotaData(userId, username, params.ModelName, params.Quota, common.GetTimestamp(), params.PromptTokens+params.CompletionTokens)
})
}
}
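A hypothetical call-site sketch (not in the diff) of the new params-struct signature; the token counts and quota are illustrative. Keyed fields make call sites self-documenting compared with the old long positional argument list:

package controller

import (
    "one-api/model"

    "github.com/gin-gonic/gin"
)

func logUsage(c *gin.Context, userId, channelId int) {
    model.RecordConsumeLog(c, userId, model.RecordConsumeLogParams{
        ChannelId:        channelId,
        PromptTokens:     120,
        CompletionTokens: 350,
        ModelName:        "gpt-4o",
        Quota:            470,
        IsStream:         true,
    })
}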


@@ -57,7 +57,7 @@ func initCol() {
}
}
// log sql type and database type
common.SysLog("Using Log SQL Type: " + common.LogSqlType)
//common.SysLog("Using Log SQL Type: " + common.LogSqlType)
}
var DB *gorm.DB
@@ -225,12 +225,6 @@ func InitLogDB() (err error) {
if !common.IsMasterNode {
return nil
}
//if common.UsingMySQL {
// _, _ = sqlDB.Exec("DROP INDEX idx_channels_key ON channels;") // TODO: delete this line when most users have upgraded
// _, _ = sqlDB.Exec("ALTER TABLE midjourneys MODIFY action VARCHAR(40);") // TODO: delete this line when most users have upgraded
// _, _ = sqlDB.Exec("ALTER TABLE midjourneys MODIFY progress VARCHAR(30);") // TODO: delete this line when most users have upgraded
// _, _ = sqlDB.Exec("ALTER TABLE midjourneys MODIFY status VARCHAR(20);") // TODO: delete this line when most users have upgraded
//}
common.SysLog("database migration started")
err = migrateLOGDB()
return err


@@ -14,6 +14,8 @@ type Midjourney struct {
StartTime int64 `json:"start_time" gorm:"index"`
FinishTime int64 `json:"finish_time" gorm:"index"`
ImageUrl string `json:"image_url"`
VideoUrl string `json:"video_url"`
VideoUrls string `json:"video_urls"`
Status string `json:"status" gorm:"type:varchar(20);index"`
Progress string `json:"progress" gorm:"type:varchar(30);index"`
FailReason string `json:"fail_reason"`


@@ -1,20 +1,24 @@
package model
import (
"fmt"
"one-api/common"
"one-api/constant"
"one-api/setting/ratio_setting"
"one-api/types"
"sync"
"time"
)
type Pricing struct {
ModelName string `json:"model_name"`
QuotaType int `json:"quota_type"`
ModelRatio float64 `json:"model_ratio"`
ModelPrice float64 `json:"model_price"`
OwnerBy string `json:"owner_by"`
CompletionRatio float64 `json:"completion_ratio"`
EnableGroup []string `json:"enable_groups,omitempty"`
ModelName string `json:"model_name"`
QuotaType int `json:"quota_type"`
ModelRatio float64 `json:"model_ratio"`
ModelPrice float64 `json:"model_price"`
OwnerBy string `json:"owner_by"`
CompletionRatio float64 `json:"completion_ratio"`
EnableGroup []string `json:"enable_groups"`
SupportedEndpointTypes []constant.EndpointType `json:"supported_endpoint_types"`
}
var (
@@ -23,47 +27,89 @@ var (
updatePricingLock sync.Mutex
)
func GetPricing() []Pricing {
updatePricingLock.Lock()
defer updatePricingLock.Unlock()
var (
modelSupportEndpointTypes = make(map[string][]constant.EndpointType)
modelSupportEndpointsLock = sync.RWMutex{}
)
func GetPricing() []Pricing {
if time.Since(lastGetPricingTime) > time.Minute*1 || len(pricingMap) == 0 {
updatePricing()
updatePricingLock.Lock()
defer updatePricingLock.Unlock()
// Double check after acquiring the lock
if time.Since(lastGetPricingTime) > time.Minute*1 || len(pricingMap) == 0 {
modelSupportEndpointsLock.Lock()
defer modelSupportEndpointsLock.Unlock()
updatePricing()
}
}
//if group != "" {
// userPricingMap := make([]Pricing, 0)
// models := GetGroupModels(group)
// for _, pricing := range pricingMap {
// if !common.StringsContains(models, pricing.ModelName) {
// pricing.Available = false
// }
// userPricingMap = append(userPricingMap, pricing)
// }
// return userPricingMap
//}
return pricingMap
}
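GetPricing above now rechecks staleness after taking the lock, so concurrent callers trigger at most one rebuild. A dependency-free sketch of that double-checked pattern:

package main

import (
    "sync"
    "time"
)

var (
    mu       sync.Mutex
    cache    []string
    cachedAt time.Time
)

func get() []string {
    if time.Since(cachedAt) > time.Minute || len(cache) == 0 {
        mu.Lock()
        defer mu.Unlock()
        // Re-test after acquiring the lock: another goroutine may
        // have refreshed the cache while we were waiting.
        if time.Since(cachedAt) > time.Minute || len(cache) == 0 {
            cache = []string{"rebuilt"} // stand-in for the expensive rebuild
            cachedAt = time.Now()
        }
    }
    return cache
}

func main() { _ = get() }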
func GetModelSupportEndpointTypes(model string) []constant.EndpointType {
if model == "" {
return make([]constant.EndpointType, 0)
}
modelSupportEndpointsLock.RLock()
defer modelSupportEndpointsLock.RUnlock()
if endpoints, ok := modelSupportEndpointTypes[model]; ok {
return endpoints
}
return make([]constant.EndpointType, 0)
}
func updatePricing() {
//modelRatios := common.GetModelRatios()
enableAbilities := GetAllEnableAbilities()
modelGroupsMap := make(map[string][]string)
enableAbilities, err := GetAllEnableAbilityWithChannels()
if err != nil {
common.SysError(fmt.Sprintf("GetAllEnableAbilityWithChannels error: %v", err))
return
}
modelGroupsMap := make(map[string]*types.Set[string])
for _, ability := range enableAbilities {
groups := modelGroupsMap[ability.Model]
if groups == nil {
groups = make([]string, 0)
groups, ok := modelGroupsMap[ability.Model]
if !ok {
groups = types.NewSet[string]()
modelGroupsMap[ability.Model] = groups
}
if !common.StringsContains(groups, ability.Group) {
groups = append(groups, ability.Group)
groups.Add(ability.Group)
}
// A slice is used here instead of a set because a model may support multiple endpoint types, and the first endpoint is the preferred one
modelSupportEndpointsStr := make(map[string][]string)
for _, ability := range enableAbilities {
endpoints, ok := modelSupportEndpointsStr[ability.Model]
if !ok {
endpoints = make([]string, 0)
modelSupportEndpointsStr[ability.Model] = endpoints
}
modelGroupsMap[ability.Model] = groups
channelTypes := common.GetEndpointTypesByChannelType(ability.ChannelType, ability.Model)
for _, channelType := range channelTypes {
if !common.StringsContains(endpoints, string(channelType)) {
endpoints = append(endpoints, string(channelType))
}
}
modelSupportEndpointsStr[ability.Model] = endpoints
}
modelSupportEndpointTypes = make(map[string][]constant.EndpointType)
for model, endpoints := range modelSupportEndpointsStr {
supportedEndpoints := make([]constant.EndpointType, 0)
for _, endpointStr := range endpoints {
endpointType := constant.EndpointType(endpointStr)
supportedEndpoints = append(supportedEndpoints, endpointType)
}
modelSupportEndpointTypes[model] = supportedEndpoints
}
pricingMap = make([]Pricing, 0)
for model, groups := range modelGroupsMap {
pricing := Pricing{
ModelName: model,
EnableGroup: groups,
ModelName: model,
EnableGroup: groups.Items(),
SupportedEndpointTypes: modelSupportEndpointTypes[model],
}
modelPrice, findPrice := ratio_setting.GetModelPrice(model, false)
if findPrice {


@@ -10,7 +10,7 @@ import (
func cacheSetToken(token Token) error {
key := common.GenerateHMAC(token.Key)
token.Clean()
err := common.RedisHSetObj(fmt.Sprintf("token:%s", key), &token, time.Duration(constant.RedisKeyCacheSeconds())*time.Second)
err := common.RedisHSetObj(fmt.Sprintf("token:%s", key), &token, time.Duration(common.RedisKeyCacheSeconds())*time.Second)
if err != nil {
return err
}


@@ -5,6 +5,7 @@ import (
"errors"
"fmt"
"one-api/common"
"one-api/dto"
"strconv"
"strings"
@@ -68,14 +69,18 @@ func (user *User) SetAccessToken(token string) {
user.AccessToken = &token
}
func (user *User) GetSetting() map[string]interface{} {
if user.Setting == "" {
return nil
func (user *User) GetSetting() dto.UserSetting {
setting := dto.UserSetting{}
if user.Setting != "" {
err := json.Unmarshal([]byte(user.Setting), &setting)
if err != nil {
common.SysError("failed to unmarshal setting: " + err.Error())
}
}
return common.StrToMap(user.Setting)
return setting
}
func (user *User) SetSetting(setting map[string]interface{}) {
func (user *User) SetSetting(setting dto.UserSetting) {
settingBytes, err := json.Marshal(setting)
if err != nil {
common.SysError("failed to marshal setting: " + err.Error())
@@ -114,7 +119,7 @@ func GetMaxUserId() int {
return user.Id
}
func GetAllUsers(startIdx int, num int) (users []*User, total int64, err error) {
func GetAllUsers(pageInfo *common.PageInfo) (users []*User, total int64, err error) {
// Start transaction
tx := DB.Begin()
if tx.Error != nil {
@@ -134,7 +139,7 @@ func GetAllUsers(startIdx int, num int) (users []*User, total int64, err error)
}
// Get paginated users within same transaction
err = tx.Unscoped().Order("id desc").Limit(num).Offset(startIdx).Omit("password").Find(&users).Error
err = tx.Unscoped().Order("id desc").Limit(pageInfo.GetPageSize()).Offset(pageInfo.GetStartIdx()).Omit("password").Find(&users).Error
if err != nil {
tx.Rollback()
return nil, 0, err
@@ -626,7 +631,7 @@ func GetUserGroup(id int, fromDB bool) (group string, err error) {
}
// GetUserSetting gets setting from Redis first, falls back to DB if needed
func GetUserSetting(id int, fromDB bool) (settingMap map[string]interface{}, err error) {
func GetUserSetting(id int, fromDB bool) (settingMap dto.UserSetting, err error) {
var setting string
defer func() {
// Update Redis cache asynchronously on successful DB read
@@ -648,10 +653,12 @@ func GetUserSetting(id int, fromDB bool) (settingMap map[string]interface{}, err
fromDB = true
err = DB.Model(&User{}).Where("id = ?", id).Select("setting").Find(&setting).Error
if err != nil {
return map[string]interface{}{}, err
return settingMap, err
}
return common.StrToMap(setting), nil
userBase := &UserBase{
Setting: setting,
}
return userBase.GetSetting(), nil
}
func IncreaseUserQuota(id int, quota int, db bool) (err error) {


@@ -1,10 +1,10 @@
package model
import (
"encoding/json"
"fmt"
"one-api/common"
"one-api/constant"
"one-api/dto"
"time"
"github.com/gin-gonic/gin"
@@ -24,28 +24,23 @@ type UserBase struct {
}
func (user *UserBase) WriteContext(c *gin.Context) {
c.Set(constant.ContextKeyUserGroup, user.Group)
c.Set(constant.ContextKeyUserQuota, user.Quota)
c.Set(constant.ContextKeyUserStatus, user.Status)
c.Set(constant.ContextKeyUserEmail, user.Email)
c.Set("username", user.Username)
c.Set(constant.ContextKeyUserSetting, user.GetSetting())
common.SetContextKey(c, constant.ContextKeyUserGroup, user.Group)
common.SetContextKey(c, constant.ContextKeyUserQuota, user.Quota)
common.SetContextKey(c, constant.ContextKeyUserStatus, user.Status)
common.SetContextKey(c, constant.ContextKeyUserEmail, user.Email)
common.SetContextKey(c, constant.ContextKeyUserName, user.Username)
common.SetContextKey(c, constant.ContextKeyUserSetting, user.GetSetting())
}
func (user *UserBase) GetSetting() map[string]interface{} {
if user.Setting == "" {
return nil
func (user *UserBase) GetSetting() dto.UserSetting {
setting := dto.UserSetting{}
if user.Setting != "" {
err := common.Unmarshal([]byte(user.Setting), &setting)
if err != nil {
common.SysError("failed to unmarshal setting: " + err.Error())
}
}
return common.StrToMap(user.Setting)
}
func (user *UserBase) SetSetting(setting map[string]interface{}) {
settingBytes, err := json.Marshal(setting)
if err != nil {
common.SysError("failed to marshal setting: " + err.Error())
return
}
user.Setting = string(settingBytes)
return setting
}
// getUserCacheKey returns the key for user cache
@@ -70,7 +65,7 @@ func updateUserCache(user User) error {
return common.RedisHSetObj(
getUserCacheKey(user.Id),
user.ToBaseUser(),
time.Duration(constant.RedisKeyCacheSeconds())*time.Second,
time.Duration(common.RedisKeyCacheSeconds())*time.Second,
)
}
@@ -174,11 +169,10 @@ func getUserNameCache(userId int) (string, error) {
return cache.Username, nil
}
func getUserSettingCache(userId int) (map[string]interface{}, error) {
setting := make(map[string]interface{})
func getUserSettingCache(userId int) (dto.UserSetting, error) {
cache, err := GetUserCache(userId)
if err != nil {
return setting, err
return dto.UserSetting{}, err
}
return cache.GetSetting(), nil
}


@@ -3,7 +3,6 @@ package relay
import (
"errors"
"fmt"
"github.com/gin-gonic/gin"
"net/http"
"one-api/common"
"one-api/dto"
@@ -12,7 +11,10 @@ import (
"one-api/relay/helper"
"one-api/service"
"one-api/setting"
"one-api/types"
"strings"
"github.com/gin-gonic/gin"
)
func getAndValidAudioRequest(c *gin.Context, info *relaycommon.RelayInfo) (*dto.AudioRequest, error) {
@@ -54,13 +56,13 @@ func getAndValidAudioRequest(c *gin.Context, info *relaycommon.RelayInfo) (*dto.
return audioRequest, nil
}
func AudioHelper(c *gin.Context) (openaiErr *dto.OpenAIErrorWithStatusCode) {
func AudioHelper(c *gin.Context) (newAPIError *types.NewAPIError) {
relayInfo := relaycommon.GenRelayInfoOpenAIAudio(c)
audioRequest, err := getAndValidAudioRequest(c, relayInfo)
if err != nil {
common.LogError(c, fmt.Sprintf("getAndValidAudioRequest failed: %s", err.Error()))
return service.OpenAIErrorWrapper(err, "invalid_audio_request", http.StatusBadRequest)
return types.NewError(err, types.ErrorCodeInvalidRequest)
}
promptTokens := 0
@@ -73,7 +75,7 @@ func AudioHelper(c *gin.Context) (openaiErr *dto.OpenAIErrorWithStatusCode) {
priceData, err := helper.ModelPriceHelper(c, relayInfo, preConsumedTokens, 0)
if err != nil {
return service.OpenAIErrorWrapperLocal(err, "model_price_error", http.StatusInternalServerError)
return types.NewError(err, types.ErrorCodeModelPriceError)
}
preConsumedQuota, userQuota, openaiErr := preConsumeQuota(c, priceData.ShouldPreConsumedQuota, relayInfo)
@@ -88,23 +90,23 @@ func AudioHelper(c *gin.Context) (openaiErr *dto.OpenAIErrorWithStatusCode) {
err = helper.ModelMappedHelper(c, relayInfo, audioRequest)
if err != nil {
return service.OpenAIErrorWrapperLocal(err, "model_mapped_error", http.StatusInternalServerError)
return types.NewError(err, types.ErrorCodeChannelModelMappedError)
}
adaptor := GetAdaptor(relayInfo.ApiType)
if adaptor == nil {
return service.OpenAIErrorWrapperLocal(fmt.Errorf("invalid api type: %d", relayInfo.ApiType), "invalid_api_type", http.StatusBadRequest)
return types.NewError(fmt.Errorf("invalid api type: %d", relayInfo.ApiType), types.ErrorCodeInvalidApiType)
}
adaptor.Init(relayInfo)
ioReader, err := adaptor.ConvertAudioRequest(c, relayInfo, *audioRequest)
if err != nil {
return service.OpenAIErrorWrapperLocal(err, "convert_request_failed", http.StatusInternalServerError)
return types.NewError(err, types.ErrorCodeConvertRequestFailed)
}
resp, err := adaptor.DoRequest(c, relayInfo, ioReader)
if err != nil {
return service.OpenAIErrorWrapper(err, "do_request_failed", http.StatusInternalServerError)
return types.NewError(err, types.ErrorCodeDoRequestFailed)
}
statusCodeMappingStr := c.GetString("status_code_mapping")
@@ -112,18 +114,18 @@ func AudioHelper(c *gin.Context) (openaiErr *dto.OpenAIErrorWithStatusCode) {
if resp != nil {
httpResp = resp.(*http.Response)
if httpResp.StatusCode != http.StatusOK {
openaiErr = service.RelayErrorHandler(httpResp, false)
newAPIError = service.RelayErrorHandler(httpResp, false)
// reset status code
service.ResetStatusCode(openaiErr, statusCodeMappingStr)
return openaiErr
service.ResetStatusCode(newAPIError, statusCodeMappingStr)
return newAPIError
}
}
usage, openaiErr := adaptor.DoResponse(c, httpResp, relayInfo)
if openaiErr != nil {
usage, newAPIError := adaptor.DoResponse(c, httpResp, relayInfo)
if newAPIError != nil {
// reset status code
service.ResetStatusCode(openaiErr, statusCodeMappingStr)
return openaiErr
service.ResetStatusCode(newAPIError, statusCodeMappingStr)
return newAPIError
}
postConsumeQuota(c, relayInfo, usage.(*dto.Usage), preConsumedQuota, userQuota, priceData, "")


@@ -5,6 +5,7 @@ import (
"net/http"
"one-api/dto"
relaycommon "one-api/relay/common"
"one-api/types"
"github.com/gin-gonic/gin"
)
@@ -21,7 +22,7 @@ type Adaptor interface {
ConvertImageRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.ImageRequest) (any, error)
ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error)
DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error)
DoResponse(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (usage any, err *dto.OpenAIErrorWithStatusCode)
DoResponse(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (usage any, err *types.NewAPIError)
GetModelList() []string
GetChannelName() string
ConvertClaudeRequest(c *gin.Context, info *relaycommon.RelayInfo, request *dto.ClaudeRequest) (any, error)
@@ -45,5 +46,5 @@ type TaskAdaptor interface {
// FetchTask
FetchTask(baseUrl, key string, body map[string]any) (*http.Response, error)
ParseResultUrl(resp map[string]any) (string, error)
ParseTaskResult(respBody []byte) (*relaycommon.TaskInfo, error)
}


@@ -10,6 +10,7 @@ import (
"one-api/relay/channel/openai"
relaycommon "one-api/relay/common"
"one-api/relay/constant"
"one-api/types"
"github.com/gin-gonic/gin"
)
@@ -30,7 +31,7 @@ func (a *Adaptor) GetRequestURL(info *relaycommon.RelayInfo) (string, error) {
var fullRequestURL string
switch info.RelayMode {
case constant.RelayModeEmbeddings:
fullRequestURL = fmt.Sprintf("%s/api/v1/services/embeddings/text-embedding/text-embedding", info.BaseUrl)
fullRequestURL = fmt.Sprintf("%s/compatible-mode/v1/embeddings", info.BaseUrl)
case constant.RelayModeRerank:
fullRequestURL = fmt.Sprintf("%s/api/v1/services/rerank/text-rerank/text-rerank", info.BaseUrl)
case constant.RelayModeImagesGenerations:
@@ -82,7 +83,7 @@ func (a *Adaptor) ConvertRerankRequest(c *gin.Context, relayMode int, request dt
}
func (a *Adaptor) ConvertEmbeddingRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.EmbeddingRequest) (any, error) {
return embeddingRequestOpenAI2Ali(request), nil
return request, nil
}
func (a *Adaptor) ConvertAudioRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.AudioRequest) (io.Reader, error) {
@@ -99,7 +100,7 @@ func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, request
return channel.DoApiRequest(a, c, info, requestBody)
}
func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (usage any, err *dto.OpenAIErrorWithStatusCode) {
func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (usage any, err *types.NewAPIError) {
switch info.RelayMode {
case constant.RelayModeImagesGenerations:
err, usage = aliImageHandler(c, resp, info)
@@ -109,9 +110,9 @@ func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *relaycom
err, usage = RerankHandler(c, resp, info)
default:
if info.IsStream {
err, usage = openai.OaiStreamHandler(c, resp, info)
usage, err = openai.OaiStreamHandler(c, info, resp)
} else {
err, usage = openai.OpenaiHandler(c, resp, info)
usage, err = openai.OpenaiHandler(c, info, resp)
}
}
return


@@ -4,15 +4,17 @@ import (
"encoding/json"
"errors"
"fmt"
"github.com/gin-gonic/gin"
"io"
"net/http"
"one-api/common"
"one-api/dto"
relaycommon "one-api/relay/common"
"one-api/service"
"one-api/types"
"strings"
"time"
"github.com/gin-gonic/gin"
)
func oaiImage2Ali(request dto.ImageRequest) *AliImageRequest {
@@ -124,52 +126,46 @@ func responseAli2OpenAIImage(c *gin.Context, response *AliResponse, info *relayc
return &imageResponse
}
func aliImageHandler(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (*dto.OpenAIErrorWithStatusCode, *dto.Usage) {
func aliImageHandler(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (*types.NewAPIError, *dto.Usage) {
responseFormat := c.GetString("response_format")
var aliTaskResponse AliResponse
responseBody, err := io.ReadAll(resp.Body)
if err != nil {
return service.OpenAIErrorWrapper(err, "read_response_body_failed", http.StatusInternalServerError), nil
}
err = resp.Body.Close()
if err != nil {
return service.OpenAIErrorWrapper(err, "close_response_body_failed", http.StatusInternalServerError), nil
return types.NewError(err, types.ErrorCodeReadResponseBodyFailed), nil
}
common.CloseResponseBodyGracefully(resp)
err = json.Unmarshal(responseBody, &aliTaskResponse)
if err != nil {
return service.OpenAIErrorWrapper(err, "unmarshal_response_body_failed", http.StatusInternalServerError), nil
return types.NewError(err, types.ErrorCodeBadResponseBody), nil
}
if aliTaskResponse.Message != "" {
common.LogError(c, "ali_async_task_failed: "+aliTaskResponse.Message)
return service.OpenAIErrorWrapper(errors.New(aliTaskResponse.Message), "ali_async_task_failed", http.StatusInternalServerError), nil
return types.NewError(errors.New(aliTaskResponse.Message), types.ErrorCodeBadResponse), nil
}
aliResponse, _, err := asyncTaskWait(info, aliTaskResponse.Output.TaskId)
if err != nil {
return service.OpenAIErrorWrapper(err, "ali_async_task_wait_failed", http.StatusInternalServerError), nil
return types.NewError(err, types.ErrorCodeBadResponse), nil
}
if aliResponse.Output.TaskStatus != "SUCCEEDED" {
return &dto.OpenAIErrorWithStatusCode{
Error: dto.OpenAIError{
Message: aliResponse.Output.Message,
Type: "ali_error",
Param: "",
Code: aliResponse.Output.Code,
},
StatusCode: resp.StatusCode,
}, nil
return types.WithOpenAIError(types.OpenAIError{
Message: aliResponse.Output.Message,
Type: "ali_error",
Param: "",
Code: aliResponse.Output.Code,
}, resp.StatusCode), nil
}
fullTextResponse := responseAli2OpenAIImage(c, aliResponse, info, responseFormat)
jsonResponse, err := json.Marshal(fullTextResponse)
if err != nil {
return service.OpenAIErrorWrapper(err, "marshal_response_body_failed", http.StatusInternalServerError), nil
return types.NewError(err, types.ErrorCodeBadResponseBody), nil
}
c.Writer.Header().Set("Content-Type", "application/json")
c.Writer.WriteHeader(resp.StatusCode)
_, err = c.Writer.Write(jsonResponse)
return nil, nil
c.Writer.Write(jsonResponse)
return nil, &dto.Usage{}
}


@@ -4,9 +4,10 @@ import (
"encoding/json"
"io"
"net/http"
"one-api/common"
"one-api/dto"
relaycommon "one-api/relay/common"
"one-api/service"
"one-api/types"
"github.com/gin-gonic/gin"
)
@@ -30,32 +31,26 @@ func ConvertRerankRequest(request dto.RerankRequest) *AliRerankRequest {
}
}
func RerankHandler(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (*dto.OpenAIErrorWithStatusCode, *dto.Usage) {
func RerankHandler(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (*types.NewAPIError, *dto.Usage) {
responseBody, err := io.ReadAll(resp.Body)
if err != nil {
return service.OpenAIErrorWrapper(err, "read_response_body_failed", http.StatusInternalServerError), nil
}
err = resp.Body.Close()
if err != nil {
return service.OpenAIErrorWrapper(err, "close_response_body_failed", http.StatusInternalServerError), nil
return types.NewError(err, types.ErrorCodeReadResponseBodyFailed), nil
}
common.CloseResponseBodyGracefully(resp)
var aliResponse AliRerankResponse
err = json.Unmarshal(responseBody, &aliResponse)
if err != nil {
return service.OpenAIErrorWrapper(err, "unmarshal_response_body_failed", http.StatusInternalServerError), nil
return types.NewError(err, types.ErrorCodeBadResponseBody), nil
}
if aliResponse.Code != "" {
return &dto.OpenAIErrorWithStatusCode{
Error: dto.OpenAIError{
Message: aliResponse.Message,
Type: aliResponse.Code,
Param: aliResponse.RequestId,
Code: aliResponse.Code,
},
StatusCode: resp.StatusCode,
}, nil
return types.WithOpenAIError(types.OpenAIError{
Message: aliResponse.Message,
Type: aliResponse.Code,
Param: aliResponse.RequestId,
Code: aliResponse.Code,
}, resp.StatusCode), nil
}
usage := dto.Usage{
@@ -70,14 +65,10 @@ func RerankHandler(c *gin.Context, resp *http.Response, info *relaycommon.RelayI
jsonResponse, err := json.Marshal(rerankResponse)
if err != nil {
return service.OpenAIErrorWrapper(err, "marshal_response_body_failed", http.StatusInternalServerError), nil
return types.NewError(err, types.ErrorCodeBadResponseBody), nil
}
c.Writer.Header().Set("Content-Type", "application/json")
c.Writer.WriteHeader(resp.StatusCode)
_, err = c.Writer.Write(jsonResponse)
if err != nil {
return service.OpenAIErrorWrapper(err, "write_response_body_failed", http.StatusInternalServerError), nil
}
c.Writer.Write(jsonResponse)
return nil, &usage
}


@@ -8,9 +8,10 @@ import (
"one-api/common"
"one-api/dto"
"one-api/relay/helper"
"one-api/service"
"strings"
"one-api/types"
"github.com/gin-gonic/gin"
)
@@ -38,42 +39,26 @@ func embeddingRequestOpenAI2Ali(request dto.EmbeddingRequest) *AliEmbeddingReque
}
}
func aliEmbeddingHandler(c *gin.Context, resp *http.Response) (*dto.OpenAIErrorWithStatusCode, *dto.Usage) {
var aliResponse AliEmbeddingResponse
err := json.NewDecoder(resp.Body).Decode(&aliResponse)
func aliEmbeddingHandler(c *gin.Context, resp *http.Response) (*types.NewAPIError, *dto.Usage) {
var fullTextResponse dto.OpenAIEmbeddingResponse
err := json.NewDecoder(resp.Body).Decode(&fullTextResponse)
if err != nil {
return service.OpenAIErrorWrapper(err, "unmarshal_response_body_failed", http.StatusInternalServerError), nil
return types.NewError(err, types.ErrorCodeBadResponseBody), nil
}
err = resp.Body.Close()
if err != nil {
return service.OpenAIErrorWrapper(err, "close_response_body_failed", http.StatusInternalServerError), nil
}
if aliResponse.Code != "" {
return &dto.OpenAIErrorWithStatusCode{
Error: dto.OpenAIError{
Message: aliResponse.Message,
Type: aliResponse.Code,
Param: aliResponse.RequestId,
Code: aliResponse.Code,
},
StatusCode: resp.StatusCode,
}, nil
}
common.CloseResponseBodyGracefully(resp)
model := c.GetString("model")
if model == "" {
model = "text-embedding-v4"
}
fullTextResponse := embeddingResponseAli2OpenAI(&aliResponse, model)
jsonResponse, err := json.Marshal(fullTextResponse)
if err != nil {
return service.OpenAIErrorWrapper(err, "marshal_response_body_failed", http.StatusInternalServerError), nil
return types.NewError(err, types.ErrorCodeBadResponseBody), nil
}
c.Writer.Header().Set("Content-Type", "application/json")
c.Writer.WriteHeader(resp.StatusCode)
_, err = c.Writer.Write(jsonResponse)
c.Writer.Write(jsonResponse)
return nil, &fullTextResponse.Usage
}
@@ -135,7 +120,7 @@ func streamResponseAli2OpenAI(aliResponse *AliResponse) *dto.ChatCompletionsStre
return &response
}
func aliStreamHandler(c *gin.Context, resp *http.Response) (*dto.OpenAIErrorWithStatusCode, *dto.Usage) {
func aliStreamHandler(c *gin.Context, resp *http.Response) (*types.NewAPIError, *dto.Usage) {
var usage dto.Usage
scanner := bufio.NewScanner(resp.Body)
scanner.Split(bufio.ScanLines)
@@ -186,42 +171,33 @@ func aliStreamHandler(c *gin.Context, resp *http.Response) (*dto.OpenAIErrorWith
return false
}
})
err := resp.Body.Close()
if err != nil {
return service.OpenAIErrorWrapper(err, "close_response_body_failed", http.StatusInternalServerError), nil
}
common.CloseResponseBodyGracefully(resp)
return nil, &usage
}
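Throughout these hunks, the explicit `resp.Body.Close()` error branch (which previously produced a `close_response_body_failed` API error) collapses into `common.CloseResponseBodyGracefully(resp)`. A plausible sketch of such a helper, assuming (not confirmed by the diff) that it is nil-safe and logs instead of propagating:

package common

import (
	"log"
	"net/http"
)

// CloseResponseBodyGracefully closes an upstream response body without
// turning a close failure into a client-facing error (hypothetical sketch).
func CloseResponseBodyGracefully(resp *http.Response) {
	if resp == nil || resp.Body == nil {
		return
	}
	if err := resp.Body.Close(); err != nil {
		log.Println("close_response_body_failed:", err)
	}
}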
func aliHandler(c *gin.Context, resp *http.Response) (*dto.OpenAIErrorWithStatusCode, *dto.Usage) {
func aliHandler(c *gin.Context, resp *http.Response) (*types.NewAPIError, *dto.Usage) {
var aliResponse AliResponse
responseBody, err := io.ReadAll(resp.Body)
if err != nil {
return service.OpenAIErrorWrapper(err, "read_response_body_failed", http.StatusInternalServerError), nil
}
err = resp.Body.Close()
if err != nil {
return service.OpenAIErrorWrapper(err, "close_response_body_failed", http.StatusInternalServerError), nil
return types.NewError(err, types.ErrorCodeReadResponseBodyFailed), nil
}
common.CloseResponseBodyGracefully(resp)
err = json.Unmarshal(responseBody, &aliResponse)
if err != nil {
return service.OpenAIErrorWrapper(err, "unmarshal_response_body_failed", http.StatusInternalServerError), nil
return types.NewError(err, types.ErrorCodeBadResponseBody), nil
}
if aliResponse.Code != "" {
return &dto.OpenAIErrorWithStatusCode{
Error: dto.OpenAIError{
Message: aliResponse.Message,
Type: aliResponse.Code,
Param: aliResponse.RequestId,
Code: aliResponse.Code,
},
StatusCode: resp.StatusCode,
}, nil
return types.WithOpenAIError(types.OpenAIError{
Message: aliResponse.Message,
Type: "ali_error",
Param: aliResponse.RequestId,
Code: aliResponse.Code,
}, resp.StatusCode), nil
}
fullTextResponse := responseAli2OpenAI(&aliResponse)
jsonResponse, err := json.Marshal(fullTextResponse)
jsonResponse, err := common.Marshal(fullTextResponse)
if err != nil {
return service.OpenAIErrorWrapper(err, "marshal_response_body_failed", http.StatusInternalServerError), nil
return types.NewError(err, types.ErrorCodeBadResponseBody), nil
}
c.Writer.Header().Set("Content-Type", "application/json")
c.Writer.WriteHeader(resp.StatusCode)

View File

@@ -206,8 +206,8 @@ func sendPingData(c *gin.Context, mutex *sync.Mutex) error {
func doRequest(c *gin.Context, req *http.Request, info *common.RelayInfo) (*http.Response, error) {
var client *http.Client
var err error
if proxyURL, ok := info.ChannelSetting["proxy"]; ok {
client, err = service.NewProxyHttpClient(proxyURL.(string))
if info.ChannelSetting.Proxy != "" {
client, err = service.NewProxyHttpClient(info.ChannelSetting.Proxy)
if err != nil {
return nil, fmt.Errorf("new proxy http client failed: %w", err)
}
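This hunk reflects `ChannelSetting` changing from an untyped `map[string]interface{}` lookup to a typed struct field, which removes the `proxyURL.(string)` assertion. A self-contained sketch of the resulting shape; the field name comes from the hunk, but the struct layout and the stand-in client constructor are assumptions:

package main

import (
	"fmt"
	"net/http"
	"net/url"
)

// ChannelSetting as a typed struct; previously the proxy URL lived behind
// info.ChannelSetting["proxy"] and required a type assertion.
type ChannelSetting struct {
	Proxy string `json:"proxy,omitempty"`
}

// newProxyHTTPClient is a stand-in for service.NewProxyHttpClient.
func newProxyHTTPClient(proxy string) (*http.Client, error) {
	u, err := url.Parse(proxy)
	if err != nil {
		return nil, fmt.Errorf("new proxy http client failed: %w", err)
	}
	return &http.Client{Transport: &http.Transport{Proxy: http.ProxyURL(u)}}, nil
}

func clientFor(setting ChannelSetting) (*http.Client, error) {
	if setting.Proxy != "" {
		return newProxyHTTPClient(setting.Proxy)
	}
	return http.DefaultClient, nil
}

func main() {
	client, err := clientFor(ChannelSetting{Proxy: "socks5://127.0.0.1:1080"})
	fmt.Println(client != nil, err)
}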

View File

@@ -8,6 +8,7 @@ import (
"one-api/relay/channel/claude"
relaycommon "one-api/relay/common"
"one-api/setting/model_setting"
"one-api/types"
"github.com/gin-gonic/gin"
)
@@ -84,7 +85,7 @@ func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, request
return nil, nil
}
func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (usage any, err *dto.OpenAIErrorWithStatusCode) {
func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (usage any, err *types.NewAPIError) {
if info.IsStream {
err, usage = awsStreamHandler(c, resp, info, a.RequestMode)
} else {

View File

@@ -3,19 +3,22 @@ package aws
import (
"encoding/json"
"fmt"
"github.com/gin-gonic/gin"
"github.com/pkg/errors"
"net/http"
"one-api/common"
"one-api/dto"
"one-api/relay/channel/claude"
relaycommon "one-api/relay/common"
"one-api/relay/helper"
"one-api/types"
"strings"
"github.com/gin-gonic/gin"
"github.com/pkg/errors"
"github.com/aws/aws-sdk-go-v2/aws"
"github.com/aws/aws-sdk-go-v2/credentials"
"github.com/aws/aws-sdk-go-v2/service/bedrockruntime"
"github.com/aws/aws-sdk-go-v2/service/bedrockruntime/types"
bedrockruntimeTypes "github.com/aws/aws-sdk-go-v2/service/bedrockruntime/types"
)
func newAwsClient(c *gin.Context, info *relaycommon.RelayInfo) (*bedrockruntime.Client, error) {
@@ -65,24 +68,21 @@ func awsModelCrossRegion(awsModelId, awsRegionPrefix string) string {
return modelPrefix + "." + awsModelId
}
func awsModelID(requestModel string) (string, error) {
func awsModelID(requestModel string) string {
if awsModelID, ok := awsModelIDMap[requestModel]; ok {
return awsModelID, nil
return awsModelID
}
return requestModel, nil
return requestModel
}
func awsHandler(c *gin.Context, info *relaycommon.RelayInfo, requestMode int) (*dto.OpenAIErrorWithStatusCode, *dto.Usage) {
func awsHandler(c *gin.Context, info *relaycommon.RelayInfo, requestMode int) (*types.NewAPIError, *dto.Usage) {
awsCli, err := newAwsClient(c, info)
if err != nil {
return wrapErr(errors.Wrap(err, "newAwsClient")), nil
return types.NewError(err, types.ErrorCodeChannelAwsClientError), nil
}
awsModelId, err := awsModelID(c.GetString("request_model"))
if err != nil {
return wrapErr(errors.Wrap(err, "awsModelID")), nil
}
awsModelId := awsModelID(c.GetString("request_model"))
awsRegionPrefix := awsRegionPrefix(awsCli.Options().Region)
canCrossRegion := awsModelCanCrossRegion(awsModelId, awsRegionPrefix)
@@ -98,42 +98,42 @@ func awsHandler(c *gin.Context, info *relaycommon.RelayInfo, requestMode int) (*
claudeReq_, ok := c.Get("converted_request")
if !ok {
return wrapErr(errors.New("request not found")), nil
return types.NewError(errors.New("aws claude request not found"), types.ErrorCodeInvalidRequest), nil
}
claudeReq := claudeReq_.(*dto.ClaudeRequest)
awsClaudeReq := copyRequest(claudeReq)
awsReq.Body, err = json.Marshal(awsClaudeReq)
if err != nil {
return wrapErr(errors.Wrap(err, "marshal request")), nil
return types.NewError(errors.Wrap(err, "marshal request"), types.ErrorCodeBadResponseBody), nil
}
awsResp, err := awsCli.InvokeModel(c.Request.Context(), awsReq)
if err != nil {
return wrapErr(errors.Wrap(err, "InvokeModel")), nil
return types.NewError(errors.Wrap(err, "InvokeModel"), types.ErrorCodeChannelAwsClientError), nil
}
claudeInfo := &claude.ClaudeResponseInfo{
ResponseId: fmt.Sprintf("chatcmpl-%s", common.GetUUID()),
ResponseId: helper.GetResponseID(c),
Created: common.GetTimestamp(),
Model: info.UpstreamModelName,
ResponseText: strings.Builder{},
Usage: &dto.Usage{},
}
claude.HandleClaudeResponseData(c, info, claudeInfo, awsResp.Body, RequestModeMessage)
handlerErr := claude.HandleClaudeResponseData(c, info, claudeInfo, awsResp.Body, RequestModeMessage)
if handlerErr != nil {
return handlerErr, nil
}
return nil, claudeInfo.Usage
}
func awsStreamHandler(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo, requestMode int) (*dto.OpenAIErrorWithStatusCode, *dto.Usage) {
func awsStreamHandler(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo, requestMode int) (*types.NewAPIError, *dto.Usage) {
awsCli, err := newAwsClient(c, info)
if err != nil {
return wrapErr(errors.Wrap(err, "newAwsClient")), nil
return types.NewError(err, types.ErrorCodeChannelAwsClientError), nil
}
awsModelId, err := awsModelID(c.GetString("request_model"))
if err != nil {
return wrapErr(errors.Wrap(err, "awsModelID")), nil
}
awsModelId := awsModelID(c.GetString("request_model"))
awsRegionPrefix := awsRegionPrefix(awsCli.Options().Region)
canCrossRegion := awsModelCanCrossRegion(awsModelId, awsRegionPrefix)
@@ -149,25 +149,25 @@ func awsStreamHandler(c *gin.Context, resp *http.Response, info *relaycommon.Rel
claudeReq_, ok := c.Get("converted_request")
if !ok {
return wrapErr(errors.New("request not found")), nil
return types.NewError(errors.New("aws claude request not found"), types.ErrorCodeInvalidRequest), nil
}
claudeReq := claudeReq_.(*dto.ClaudeRequest)
awsClaudeReq := copyRequest(claudeReq)
awsReq.Body, err = json.Marshal(awsClaudeReq)
if err != nil {
return wrapErr(errors.Wrap(err, "marshal request")), nil
return types.NewError(errors.Wrap(err, "marshal request"), types.ErrorCodeBadResponseBody), nil
}
awsResp, err := awsCli.InvokeModelWithResponseStream(c.Request.Context(), awsReq)
if err != nil {
return wrapErr(errors.Wrap(err, "InvokeModelWithResponseStream")), nil
return types.NewError(errors.Wrap(err, "InvokeModelWithResponseStream"), types.ErrorCodeChannelAwsClientError), nil
}
stream := awsResp.GetStream()
defer stream.Close()
claudeInfo := &claude.ClaudeResponseInfo{
ResponseId: fmt.Sprintf("chatcmpl-%s", common.GetUUID()),
ResponseId: helper.GetResponseID(c),
Created: common.GetTimestamp(),
Model: info.UpstreamModelName,
ResponseText: strings.Builder{},
@@ -176,18 +176,18 @@ func awsStreamHandler(c *gin.Context, resp *http.Response, info *relaycommon.Rel
for event := range stream.Events() {
switch v := event.(type) {
case *types.ResponseStreamMemberChunk:
case *bedrockruntimeTypes.ResponseStreamMemberChunk:
info.SetFirstResponseTime()
respErr := claude.HandleStreamResponseData(c, info, claudeInfo, string(v.Value.Bytes), RequestModeMessage)
if respErr != nil {
return respErr, nil
}
case *types.UnknownUnionMember:
case *bedrockruntimeTypes.UnknownUnionMember:
fmt.Println("unknown tag:", v.Tag)
return wrapErr(errors.New("unknown response type")), nil
return types.NewError(errors.New("unknown response type"), types.ErrorCodeInvalidRequest), nil
default:
fmt.Println("union is nil or unknown type")
return wrapErr(errors.New("nil or unknown response type")), nil
return types.NewError(errors.New("nil or unknown response type"), types.ErrorCodeInvalidRequest), nil
}
}
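The rename to `bedrockruntimeTypes` is forced by the new `one-api/types` import: two packages named `types` cannot coexist unaliased in one file. A minimal standalone sketch of the aliased event switch, mirroring the loop above with the handling stubbed out (the channel element type is assumed from the SDK's union interface name):

package aws

import (
	"errors"
	"fmt"

	bedrockruntimeTypes "github.com/aws/aws-sdk-go-v2/service/bedrockruntime/types"
)

// drainEvents consumes a Bedrock response stream, forwarding each chunk's
// payload bytes and failing on unknown union members.
func drainEvents(events <-chan bedrockruntimeTypes.ResponseStream, onChunk func([]byte)) error {
	for event := range events {
		switch v := event.(type) {
		case *bedrockruntimeTypes.ResponseStreamMemberChunk:
			onChunk(v.Value.Bytes) // payload bytes from the SDK's union member
		case *bedrockruntimeTypes.UnknownUnionMember:
			return fmt.Errorf("unknown tag: %s", v.Tag)
		default:
			return errors.New("nil or unknown response type")
		}
	}
	return nil
}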

View File

@@ -9,6 +9,7 @@ import (
"one-api/relay/channel"
relaycommon "one-api/relay/common"
"one-api/relay/constant"
"one-api/types"
"strings"
"github.com/gin-gonic/gin"
@@ -140,15 +141,15 @@ func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, request
return channel.DoApiRequest(a, c, info, requestBody)
}
func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (usage any, err *dto.OpenAIErrorWithStatusCode) {
func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (usage any, err *types.NewAPIError) {
if info.IsStream {
err, usage = baiduStreamHandler(c, resp)
err, usage = baiduStreamHandler(c, info, resp)
} else {
switch info.RelayMode {
case constant.RelayModeEmbeddings:
err, usage = baiduEmbeddingHandler(c, resp)
err, usage = baiduEmbeddingHandler(c, info, resp)
default:
err, usage = baiduHandler(c, resp)
err, usage = baiduHandler(c, info, resp)
}
}
return

View File

@@ -1,21 +1,23 @@
package baidu
import (
"bufio"
"encoding/json"
"errors"
"fmt"
"github.com/gin-gonic/gin"
"io"
"net/http"
"one-api/common"
"one-api/constant"
"one-api/dto"
relaycommon "one-api/relay/common"
"one-api/relay/helper"
"one-api/service"
"one-api/types"
"strings"
"sync"
"time"
"github.com/gin-gonic/gin"
)
// https://cloud.baidu.com/doc/WENXINWORKSHOP/s/flfmc9do2
@@ -110,98 +112,49 @@ func embeddingResponseBaidu2OpenAI(response *BaiduEmbeddingResponse) *dto.OpenAI
return &openAIEmbeddingResponse
}
func baiduStreamHandler(c *gin.Context, resp *http.Response) (*dto.OpenAIErrorWithStatusCode, *dto.Usage) {
var usage dto.Usage
scanner := bufio.NewScanner(resp.Body)
scanner.Split(func(data []byte, atEOF bool) (advance int, token []byte, err error) {
if atEOF && len(data) == 0 {
return 0, nil, nil
}
if i := strings.Index(string(data), "\n"); i >= 0 {
return i + 1, data[0:i], nil
}
if atEOF {
return len(data), data, nil
}
return 0, nil, nil
})
dataChan := make(chan string)
stopChan := make(chan bool)
go func() {
for scanner.Scan() {
data := scanner.Text()
if len(data) < 6 { // ignore blank line or wrong format
continue
}
data = data[6:]
dataChan <- data
}
stopChan <- true
}()
helper.SetEventStreamHeaders(c)
c.Stream(func(w io.Writer) bool {
select {
case data := <-dataChan:
var baiduResponse BaiduChatStreamResponse
err := json.Unmarshal([]byte(data), &baiduResponse)
if err != nil {
common.SysError("error unmarshalling stream response: " + err.Error())
return true
}
if baiduResponse.Usage.TotalTokens != 0 {
usage.TotalTokens = baiduResponse.Usage.TotalTokens
usage.PromptTokens = baiduResponse.Usage.PromptTokens
usage.CompletionTokens = baiduResponse.Usage.TotalTokens - baiduResponse.Usage.PromptTokens
}
response := streamResponseBaidu2OpenAI(&baiduResponse)
jsonResponse, err := json.Marshal(response)
if err != nil {
common.SysError("error marshalling stream response: " + err.Error())
return true
}
c.Render(-1, common.CustomEvent{Data: "data: " + string(jsonResponse)})
func baiduStreamHandler(c *gin.Context, info *relaycommon.RelayInfo, resp *http.Response) (*types.NewAPIError, *dto.Usage) {
usage := &dto.Usage{}
helper.StreamScannerHandler(c, resp, info, func(data string) bool {
var baiduResponse BaiduChatStreamResponse
err := common.Unmarshal([]byte(data), &baiduResponse)
if err != nil {
common.SysError("error unmarshalling stream response: " + err.Error())
return true
case <-stopChan:
c.Render(-1, common.CustomEvent{Data: "data: [DONE]"})
return false
}
if baiduResponse.Usage.TotalTokens != 0 {
usage.TotalTokens = baiduResponse.Usage.TotalTokens
usage.PromptTokens = baiduResponse.Usage.PromptTokens
usage.CompletionTokens = baiduResponse.Usage.TotalTokens - baiduResponse.Usage.PromptTokens
}
response := streamResponseBaidu2OpenAI(&baiduResponse)
err = helper.ObjectData(c, response)
if err != nil {
common.SysError("error sending stream response: " + err.Error())
}
return true
})
err := resp.Body.Close()
if err != nil {
return service.OpenAIErrorWrapper(err, "close_response_body_failed", http.StatusInternalServerError), nil
}
return nil, &usage
common.CloseResponseBodyGracefully(resp)
return nil, usage
}
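The hand-rolled scanner, goroutine, and dataChan/stopChan plumbing above collapses into `helper.StreamScannerHandler`, which owns the SSE read loop and hands each data payload to a callback. A hypothetical sketch of that helper's core; the real one also takes `RelayInfo` and handles event-stream headers, ping frames, and timeouts, all omitted here:

package helper

import (
	"bufio"
	"net/http"
	"strings"

	"github.com/gin-gonic/gin"
)

// streamScannerHandler drives an SSE response body line by line and invokes
// onData for each "data: ..." payload until it returns false (sketch only).
func streamScannerHandler(c *gin.Context, resp *http.Response, onData func(data string) bool) {
	scanner := bufio.NewScanner(resp.Body)
	scanner.Split(bufio.ScanLines)
	for scanner.Scan() {
		line := scanner.Text()
		if !strings.HasPrefix(line, "data: ") {
			continue // skip blank lines, comments, and malformed frames
		}
		if !onData(strings.TrimPrefix(line, "data: ")) {
			return
		}
	}
}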
func baiduHandler(c *gin.Context, resp *http.Response) (*dto.OpenAIErrorWithStatusCode, *dto.Usage) {
func baiduHandler(c *gin.Context, info *relaycommon.RelayInfo, resp *http.Response) (*types.NewAPIError, *dto.Usage) {
var baiduResponse BaiduChatResponse
responseBody, err := io.ReadAll(resp.Body)
if err != nil {
return service.OpenAIErrorWrapper(err, "read_response_body_failed", http.StatusInternalServerError), nil
}
err = resp.Body.Close()
if err != nil {
return service.OpenAIErrorWrapper(err, "close_response_body_failed", http.StatusInternalServerError), nil
return types.NewError(err, types.ErrorCodeBadResponseBody), nil
}
common.CloseResponseBodyGracefully(resp)
err = json.Unmarshal(responseBody, &baiduResponse)
if err != nil {
return service.OpenAIErrorWrapper(err, "unmarshal_response_body_failed", http.StatusInternalServerError), nil
return types.NewError(err, types.ErrorCodeBadResponseBody), nil
}
if baiduResponse.ErrorMsg != "" {
return &dto.OpenAIErrorWithStatusCode{
Error: dto.OpenAIError{
Message: baiduResponse.ErrorMsg,
Type: "baidu_error",
Param: "",
Code: baiduResponse.ErrorCode,
},
StatusCode: resp.StatusCode,
}, nil
return types.NewError(fmt.Errorf("%s", baiduResponse.ErrorMsg), types.ErrorCodeBadResponseBody), nil
}
fullTextResponse := responseBaidu2OpenAI(&baiduResponse)
jsonResponse, err := json.Marshal(fullTextResponse)
if err != nil {
return service.OpenAIErrorWrapper(err, "marshal_response_body_failed", http.StatusInternalServerError), nil
return types.NewError(err, types.ErrorCodeBadResponseBody), nil
}
c.Writer.Header().Set("Content-Type", "application/json")
c.Writer.WriteHeader(resp.StatusCode)
@@ -209,35 +162,24 @@ func baiduHandler(c *gin.Context, resp *http.Response) (*dto.OpenAIErrorWithStat
return nil, &fullTextResponse.Usage
}
func baiduEmbeddingHandler(c *gin.Context, resp *http.Response) (*dto.OpenAIErrorWithStatusCode, *dto.Usage) {
func baiduEmbeddingHandler(c *gin.Context, info *relaycommon.RelayInfo, resp *http.Response) (*types.NewAPIError, *dto.Usage) {
var baiduResponse BaiduEmbeddingResponse
responseBody, err := io.ReadAll(resp.Body)
if err != nil {
return service.OpenAIErrorWrapper(err, "read_response_body_failed", http.StatusInternalServerError), nil
}
err = resp.Body.Close()
if err != nil {
return service.OpenAIErrorWrapper(err, "close_response_body_failed", http.StatusInternalServerError), nil
return types.NewError(err, types.ErrorCodeBadResponseBody), nil
}
common.CloseResponseBodyGracefully(resp)
err = json.Unmarshal(responseBody, &baiduResponse)
if err != nil {
return service.OpenAIErrorWrapper(err, "unmarshal_response_body_failed", http.StatusInternalServerError), nil
return types.NewError(err, types.ErrorCodeBadResponseBody), nil
}
if baiduResponse.ErrorMsg != "" {
return &dto.OpenAIErrorWithStatusCode{
Error: dto.OpenAIError{
Message: baiduResponse.ErrorMsg,
Type: "baidu_error",
Param: "",
Code: baiduResponse.ErrorCode,
},
StatusCode: resp.StatusCode,
}, nil
return types.NewError(fmt.Errorf("%s", baiduResponse.ErrorMsg), types.ErrorCodeBadResponseBody), nil
}
fullTextResponse := embeddingResponseBaidu2OpenAI(&baiduResponse)
jsonResponse, err := json.Marshal(fullTextResponse)
if err != nil {
return service.OpenAIErrorWrapper(err, "marshal_response_body_failed", http.StatusInternalServerError), nil
return types.NewError(err, types.ErrorCodeBadResponseBody), nil
}
c.Writer.Header().Set("Content-Type", "application/json")
c.Writer.WriteHeader(resp.StatusCode)
@@ -280,7 +222,7 @@ func getBaiduAccessTokenHelper(apiKey string) (*BaiduAccessToken, error) {
}
req.Header.Add("Content-Type", "application/json")
req.Header.Add("Accept", "application/json")
res, err := service.GetImpatientHttpClient().Do(req)
res, err := service.GetHttpClient().Do(req)
if err != nil {
return nil, err
}
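`getBaiduAccessTokenHelper` switches from the impatient client to the standard one, so token fetches are no longer subject to the short probe deadline. The concrete timeouts are not visible in this diff; the values below are placeholders purely to illustrate the distinction:

package service

import (
	"net/http"
	"time"
)

var (
	// GetHttpClient(): the relay's default client (placeholder timeout).
	httpClient = &http.Client{Timeout: 2 * time.Minute}
	// GetImpatientHttpClient(): a short-deadline client for probes (placeholder timeout).
	impatientHTTPClient = &http.Client{Timeout: 5 * time.Second}
)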

View File

@@ -9,6 +9,7 @@ import (
"one-api/relay/channel"
"one-api/relay/channel/openai"
relaycommon "one-api/relay/common"
"one-api/types"
"strings"
"github.com/gin-gonic/gin"
@@ -42,7 +43,16 @@ func (a *Adaptor) GetRequestURL(info *relaycommon.RelayInfo) (string, error) {
func (a *Adaptor) SetupRequestHeader(c *gin.Context, req *http.Header, info *relaycommon.RelayInfo) error {
channel.SetupApiRequestHeader(info, c, req)
req.Set("Authorization", "Bearer "+info.ApiKey)
keyParts := strings.Split(info.ApiKey, "|")
// strings.Split always returns at least one element, so only the token needs checking
if keyParts[0] == "" {
return errors.New("invalid API key: authorization token is required")
}
if len(keyParts) > 1 && keyParts[1] != "" {
req.Set("appid", keyParts[1])
}
req.Set("Authorization", "Bearer "+keyParts[0])
return nil
}
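A worked example of the new `token|appid` key convention handled above (illustrative values only, not real credentials):

key := "bce-v3-example-token|app-123"
parts := strings.Split(key, "|")
// parts[0] -> "bce-v3-example-token"  => Authorization: Bearer bce-v3-example-token
// parts[1] -> "app-123"               => appid: app-123
// A key without "|" yields len(parts) == 1, so no appid header is set.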
@@ -83,11 +93,11 @@ func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, request
return channel.DoApiRequest(a, c, info, requestBody)
}
func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (usage any, err *dto.OpenAIErrorWithStatusCode) {
func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (usage any, err *types.NewAPIError) {
if info.IsStream {
err, usage = openai.OaiStreamHandler(c, resp, info)
usage, err = openai.OaiStreamHandler(c, info, resp)
} else {
err, usage = openai.OpenaiHandler(c, resp, info)
usage, err = openai.OpenaiHandler(c, info, resp)
}
return
}

View File

@@ -9,6 +9,7 @@ import (
"one-api/relay/channel"
relaycommon "one-api/relay/common"
"one-api/setting/model_setting"
"one-api/types"
"strings"
"github.com/gin-gonic/gin"
@@ -94,7 +95,7 @@ func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, request
return channel.DoApiRequest(a, c, info, requestBody)
}
func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (usage any, err *dto.OpenAIErrorWithStatusCode) {
func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (usage any, err *types.NewAPIError) {
if info.IsStream {
err, usage = ClaudeStreamHandler(c, resp, info, a.RequestMode)
} else {

View File

@@ -12,6 +12,7 @@ import (
"one-api/relay/helper"
"one-api/service"
"one-api/setting/model_setting"
"one-api/types"
"strings"
"github.com/gin-gonic/gin"
@@ -125,7 +126,7 @@ func RequestOpenAI2ClaudeMessage(textRequest dto.GeneralOpenAIRequest) (*dto.Cla
if textRequest.Reasoning != nil {
var reasoning openrouter.RequestReasoning
if err := common.DecodeJson(textRequest.Reasoning, &reasoning); err != nil {
if err := common.Unmarshal(textRequest.Reasoning, &reasoning); err != nil {
return nil, err
}
@@ -517,22 +518,15 @@ func FormatClaudeResponseInfo(requestMode int, claudeResponse *dto.ClaudeRespons
return true
}
func HandleStreamResponseData(c *gin.Context, info *relaycommon.RelayInfo, claudeInfo *ClaudeResponseInfo, data string, requestMode int) *dto.OpenAIErrorWithStatusCode {
func HandleStreamResponseData(c *gin.Context, info *relaycommon.RelayInfo, claudeInfo *ClaudeResponseInfo, data string, requestMode int) *types.NewAPIError {
var claudeResponse dto.ClaudeResponse
err := common.DecodeJsonStr(data, &claudeResponse)
err := common.UnmarshalJsonStr(data, &claudeResponse)
if err != nil {
common.SysError("error unmarshalling stream response: " + err.Error())
return service.OpenAIErrorWrapper(err, "stream_response_error", http.StatusInternalServerError)
return types.NewError(err, types.ErrorCodeBadResponseBody)
}
if claudeResponse.Error != nil && claudeResponse.Error.Type != "" {
return &dto.OpenAIErrorWithStatusCode{
Error: dto.OpenAIError{
Code: "stream_response_error",
Type: claudeResponse.Error.Type,
Message: claudeResponse.Error.Message,
},
StatusCode: http.StatusInternalServerError,
}
return types.WithClaudeError(*claudeResponse.Error, http.StatusInternalServerError)
}
if info.RelayFormat == relaycommon.RelayFormatClaude {
FormatClaudeResponseInfo(requestMode, &claudeResponse, nil, claudeInfo)
@@ -593,15 +587,15 @@ func HandleStreamFinalResponse(c *gin.Context, info *relaycommon.RelayInfo, clau
}
}
func ClaudeStreamHandler(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo, requestMode int) (*dto.OpenAIErrorWithStatusCode, *dto.Usage) {
func ClaudeStreamHandler(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo, requestMode int) (*types.NewAPIError, *dto.Usage) {
claudeInfo := &ClaudeResponseInfo{
ResponseId: fmt.Sprintf("chatcmpl-%s", common.GetUUID()),
ResponseId: helper.GetResponseID(c),
Created: common.GetTimestamp(),
Model: info.UpstreamModelName,
ResponseText: strings.Builder{},
Usage: &dto.Usage{},
}
var err *dto.OpenAIErrorWithStatusCode
var err *types.NewAPIError
helper.StreamScannerHandler(c, resp, info, func(data string) bool {
err = HandleStreamResponseData(c, info, claudeInfo, data, requestMode)
if err != nil {
@@ -617,21 +611,14 @@ func ClaudeStreamHandler(c *gin.Context, resp *http.Response, info *relaycommon.
return nil, claudeInfo.Usage
}
func HandleClaudeResponseData(c *gin.Context, info *relaycommon.RelayInfo, claudeInfo *ClaudeResponseInfo, data []byte, requestMode int) *dto.OpenAIErrorWithStatusCode {
func HandleClaudeResponseData(c *gin.Context, info *relaycommon.RelayInfo, claudeInfo *ClaudeResponseInfo, data []byte, requestMode int) *types.NewAPIError {
var claudeResponse dto.ClaudeResponse
err := common.DecodeJson(data, &claudeResponse)
err := common.Unmarshal(data, &claudeResponse)
if err != nil {
return service.OpenAIErrorWrapper(err, "unmarshal_claude_response_failed", http.StatusInternalServerError)
return types.NewError(err, types.ErrorCodeBadResponseBody)
}
if claudeResponse.Error != nil && claudeResponse.Error.Type != "" {
return &dto.OpenAIErrorWithStatusCode{
Error: dto.OpenAIError{
Message: claudeResponse.Error.Message,
Type: claudeResponse.Error.Type,
Code: claudeResponse.Error.Type,
},
StatusCode: http.StatusInternalServerError,
}
return types.WithClaudeError(*claudeResponse.Error, http.StatusInternalServerError)
}
if requestMode == RequestModeCompletion {
completionTokens := service.CountTextToken(claudeResponse.Completion, info.OriginModelName)
@@ -652,20 +639,21 @@ func HandleClaudeResponseData(c *gin.Context, info *relaycommon.RelayInfo, claud
openaiResponse.Usage = *claudeInfo.Usage
responseData, err = json.Marshal(openaiResponse)
if err != nil {
return service.OpenAIErrorWrapper(err, "marshal_response_body_failed", http.StatusInternalServerError)
return types.NewError(err, types.ErrorCodeBadResponseBody)
}
case relaycommon.RelayFormatClaude:
responseData = data
}
c.Writer.Header().Set("Content-Type", "application/json")
c.Writer.WriteHeader(http.StatusOK)
_, err = c.Writer.Write(responseData)
common.IOCopyBytesGracefully(c, nil, responseData)
return nil
}
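`common.IOCopyBytesGracefully` replaces the header-set/`WriteHeader`/`Write` triple and swallows client-side write failures instead of converting them into API errors. A hypothetical sketch consistent with the call sites in this changeset (the nil second argument at some call sites suggests an optional upstream response to close):

package common

import (
	"log"
	"net/http"

	"github.com/gin-gonic/gin"
)

// IOCopyBytesGracefully writes a prepared body to the client, closing the
// upstream response if one is supplied; write errors are logged, not returned
// (hypothetical sketch inferred from call sites).
func IOCopyBytesGracefully(c *gin.Context, resp *http.Response, data []byte) {
	if resp != nil && resp.Body != nil {
		_ = resp.Body.Close()
	}
	if _, err := c.Writer.Write(data); err != nil {
		log.Println("write_response_body_failed:", err)
	}
}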
func ClaudeHandler(c *gin.Context, resp *http.Response, requestMode int, info *relaycommon.RelayInfo) (*dto.OpenAIErrorWithStatusCode, *dto.Usage) {
func ClaudeHandler(c *gin.Context, resp *http.Response, requestMode int, info *relaycommon.RelayInfo) (*types.NewAPIError, *dto.Usage) {
defer common.CloseResponseBodyGracefully(resp)
claudeInfo := &ClaudeResponseInfo{
ResponseId: fmt.Sprintf("chatcmpl-%s", common.GetUUID()),
ResponseId: helper.GetResponseID(c),
Created: common.GetTimestamp(),
Model: info.UpstreamModelName,
ResponseText: strings.Builder{},
@@ -673,9 +661,8 @@ func ClaudeHandler(c *gin.Context, resp *http.Response, requestMode int, info *r
}
responseBody, err := io.ReadAll(resp.Body)
if err != nil {
return service.OpenAIErrorWrapper(err, "read_response_body_failed", http.StatusInternalServerError), nil
return types.NewError(err, types.ErrorCodeBadResponseBody), nil
}
resp.Body.Close()
if common.DebugEnabled {
println("responseBody: ", string(responseBody))
}
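Several handlers in this changeset stop minting a fresh `chatcmpl-<uuid>` per call and use `helper.GetResponseID(c)` instead, so stream chunks and the final body of one request share a single ID. A plausible sketch; the context key and the uuid library are assumptions:

package helper

import (
	"fmt"

	"github.com/gin-gonic/gin"
	"github.com/google/uuid"
)

// GetResponseID returns one stable response ID per request, minting it on
// first use and caching it in the gin context (hypothetical sketch).
func GetResponseID(c *gin.Context) string {
	if id, ok := c.Get("response_id"); ok {
		return id.(string)
	}
	id := fmt.Sprintf("chatcmpl-%s", uuid.NewString())
	c.Set("response_id", id)
	return id
}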

View File

@@ -10,6 +10,7 @@ import (
"one-api/relay/channel"
relaycommon "one-api/relay/common"
"one-api/relay/constant"
"one-api/types"
"github.com/gin-gonic/gin"
)
@@ -94,20 +95,20 @@ func (a *Adaptor) ConvertImageRequest(c *gin.Context, info *relaycommon.RelayInf
return nil, errors.New("not implemented")
}
func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (usage any, err *dto.OpenAIErrorWithStatusCode) {
func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (usage any, err *types.NewAPIError) {
switch info.RelayMode {
case constant.RelayModeEmbeddings:
fallthrough
case constant.RelayModeChatCompletions:
if info.IsStream {
err, usage = cfStreamHandler(c, resp, info)
err, usage = cfStreamHandler(c, info, resp)
} else {
err, usage = cfHandler(c, resp, info)
err, usage = cfHandler(c, info, resp)
}
case constant.RelayModeAudioTranslation:
fallthrough
case constant.RelayModeAudioTranscription:
err, usage = cfSTTHandler(c, resp, info)
err, usage = cfSTTHandler(c, info, resp)
}
return
}

View File

@@ -3,7 +3,6 @@ package cloudflare
import (
"bufio"
"encoding/json"
"github.com/gin-gonic/gin"
"io"
"net/http"
"one-api/common"
@@ -11,8 +10,11 @@ import (
relaycommon "one-api/relay/common"
"one-api/relay/helper"
"one-api/service"
"one-api/types"
"strings"
"time"
"github.com/gin-gonic/gin"
)
func convertCf2CompletionsRequest(textRequest dto.GeneralOpenAIRequest) *CfRequest {
@@ -25,7 +27,7 @@ func convertCf2CompletionsRequest(textRequest dto.GeneralOpenAIRequest) *CfReque
}
}
func cfStreamHandler(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (*dto.OpenAIErrorWithStatusCode, *dto.Usage) {
func cfStreamHandler(c *gin.Context, info *relaycommon.RelayInfo, resp *http.Response) (*types.NewAPIError, *dto.Usage) {
scanner := bufio.NewScanner(resp.Body)
scanner.Split(bufio.ScanLines)
@@ -81,27 +83,21 @@ func cfStreamHandler(c *gin.Context, resp *http.Response, info *relaycommon.Rela
}
helper.Done(c)
err := resp.Body.Close()
if err != nil {
common.LogError(c, "close_response_body_failed: "+err.Error())
}
common.CloseResponseBodyGracefully(resp)
return nil, usage
}
func cfHandler(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (*dto.OpenAIErrorWithStatusCode, *dto.Usage) {
func cfHandler(c *gin.Context, info *relaycommon.RelayInfo, resp *http.Response) (*types.NewAPIError, *dto.Usage) {
responseBody, err := io.ReadAll(resp.Body)
if err != nil {
return service.OpenAIErrorWrapper(err, "read_response_body_failed", http.StatusInternalServerError), nil
}
err = resp.Body.Close()
if err != nil {
return service.OpenAIErrorWrapperLocal(err, "close_response_body_failed", http.StatusInternalServerError), nil
return types.NewError(err, types.ErrorCodeBadResponseBody), nil
}
common.CloseResponseBodyGracefully(resp)
var response dto.TextResponse
err = json.Unmarshal(responseBody, &response)
if err != nil {
return service.OpenAIErrorWrapper(err, "unmarshal_response_body_failed", http.StatusInternalServerError), nil
return types.NewError(err, types.ErrorCodeBadResponseBody), nil
}
response.Model = info.UpstreamModelName
var responseText string
@@ -113,7 +109,7 @@ func cfHandler(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo)
response.Id = helper.GetResponseID(c)
jsonResponse, err := json.Marshal(response)
if err != nil {
return service.OpenAIErrorWrapper(err, "marshal_response_body_failed", http.StatusInternalServerError), nil
return types.NewError(err, types.ErrorCodeBadResponseBody), nil
}
c.Writer.Header().Set("Content-Type", "application/json")
c.Writer.WriteHeader(resp.StatusCode)
@@ -121,19 +117,16 @@ func cfHandler(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo)
return nil, usage
}
func cfSTTHandler(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (*dto.OpenAIErrorWithStatusCode, *dto.Usage) {
func cfSTTHandler(c *gin.Context, info *relaycommon.RelayInfo, resp *http.Response) (*types.NewAPIError, *dto.Usage) {
var cfResp CfAudioResponse
responseBody, err := io.ReadAll(resp.Body)
if err != nil {
return service.OpenAIErrorWrapper(err, "read_response_body_failed", http.StatusInternalServerError), nil
}
err = resp.Body.Close()
if err != nil {
return service.OpenAIErrorWrapper(err, "close_response_body_failed", http.StatusInternalServerError), nil
return types.NewError(err, types.ErrorCodeBadResponseBody), nil
}
common.CloseResponseBodyGracefully(resp)
err = json.Unmarshal(responseBody, &cfResp)
if err != nil {
return service.OpenAIErrorWrapper(err, "unmarshal_response_body_failed", http.StatusInternalServerError), nil
return types.NewError(err, types.ErrorCodeBadResponseBody), nil
}
audioResp := &dto.AudioResponse{
@@ -142,7 +135,7 @@ func cfSTTHandler(c *gin.Context, resp *http.Response, info *relaycommon.RelayIn
jsonResponse, err := json.Marshal(audioResp)
if err != nil {
return service.OpenAIErrorWrapper(err, "marshal_response_body_failed", http.StatusInternalServerError), nil
return types.NewError(err, types.ErrorCodeBadResponseBody), nil
}
c.Writer.Header().Set("Content-Type", "application/json")
c.Writer.WriteHeader(resp.StatusCode)

View File

@@ -9,6 +9,7 @@ import (
"one-api/relay/channel"
relaycommon "one-api/relay/common"
"one-api/relay/constant"
"one-api/types"
"github.com/gin-gonic/gin"
)
@@ -71,14 +72,14 @@ func (a *Adaptor) ConvertEmbeddingRequest(c *gin.Context, info *relaycommon.Rela
return nil, errors.New("not implemented")
}
func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (usage any, err *dto.OpenAIErrorWithStatusCode) {
func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (usage any, err *types.NewAPIError) {
if info.RelayMode == constant.RelayModeRerank {
err, usage = cohereRerankHandler(c, resp, info)
} else {
if info.IsStream {
err, usage = cohereStreamHandler(c, resp, info)
err, usage = cohereStreamHandler(c, info, resp)
} else {
err, usage = cohereHandler(c, resp, info.UpstreamModelName, info.PromptTokens)
err, usage = cohereHandler(c, info, resp)
}
}
return

View File

@@ -3,7 +3,6 @@ package cohere
import (
"bufio"
"encoding/json"
"github.com/gin-gonic/gin"
"io"
"net/http"
"one-api/common"
@@ -11,8 +10,11 @@ import (
relaycommon "one-api/relay/common"
"one-api/relay/helper"
"one-api/service"
"one-api/types"
"strings"
"time"
"github.com/gin-gonic/gin"
)
func requestOpenAI2Cohere(textRequest dto.GeneralOpenAIRequest) *CohereRequest {
@@ -76,7 +78,7 @@ func stopReasonCohere2OpenAI(reason string) string {
}
}
func cohereStreamHandler(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (*dto.OpenAIErrorWithStatusCode, *dto.Usage) {
func cohereStreamHandler(c *gin.Context, info *relaycommon.RelayInfo, resp *http.Response) (*types.NewAPIError, *dto.Usage) {
responseId := helper.GetResponseID(c)
createdTime := common.GetTimestamp()
usage := &dto.Usage{}
@@ -167,20 +169,17 @@ func cohereStreamHandler(c *gin.Context, resp *http.Response, info *relaycommon.
return nil, usage
}
func cohereHandler(c *gin.Context, resp *http.Response, modelName string, promptTokens int) (*dto.OpenAIErrorWithStatusCode, *dto.Usage) {
func cohereHandler(c *gin.Context, info *relaycommon.RelayInfo, resp *http.Response) (*types.NewAPIError, *dto.Usage) {
createdTime := common.GetTimestamp()
responseBody, err := io.ReadAll(resp.Body)
if err != nil {
return service.OpenAIErrorWrapper(err, "read_response_body_failed", http.StatusInternalServerError), nil
}
err = resp.Body.Close()
if err != nil {
return service.OpenAIErrorWrapper(err, "close_response_body_failed", http.StatusInternalServerError), nil
return types.NewError(err, types.ErrorCodeBadResponseBody), nil
}
common.CloseResponseBodyGracefully(resp)
var cohereResp CohereResponseResult
err = json.Unmarshal(responseBody, &cohereResp)
if err != nil {
return service.OpenAIErrorWrapper(err, "unmarshal_response_body_failed", http.StatusInternalServerError), nil
return types.NewError(err, types.ErrorCodeBadResponseBody), nil
}
usage := dto.Usage{}
usage.PromptTokens = cohereResp.Meta.BilledUnits.InputTokens
@@ -191,7 +190,7 @@ func cohereHandler(c *gin.Context, resp *http.Response, modelName string, prompt
openaiResp.Id = cohereResp.ResponseId
openaiResp.Created = createdTime
openaiResp.Object = "chat.completion"
openaiResp.Model = modelName
openaiResp.Model = info.UpstreamModelName
openaiResp.Usage = usage
openaiResp.Choices = []dto.OpenAITextResponseChoice{
@@ -204,7 +203,7 @@ func cohereHandler(c *gin.Context, resp *http.Response, modelName string, prompt
jsonResponse, err := json.Marshal(openaiResp)
if err != nil {
return service.OpenAIErrorWrapper(err, "marshal_response_body_failed", http.StatusInternalServerError), nil
return types.NewError(err, types.ErrorCodeBadResponseBody), nil
}
c.Writer.Header().Set("Content-Type", "application/json")
c.Writer.WriteHeader(resp.StatusCode)
@@ -212,19 +211,16 @@ func cohereHandler(c *gin.Context, resp *http.Response, modelName string, prompt
return nil, &usage
}
func cohereRerankHandler(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (*dto.OpenAIErrorWithStatusCode, *dto.Usage) {
func cohereRerankHandler(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (*types.NewAPIError, *dto.Usage) {
responseBody, err := io.ReadAll(resp.Body)
if err != nil {
return service.OpenAIErrorWrapper(err, "read_response_body_failed", http.StatusInternalServerError), nil
}
err = resp.Body.Close()
if err != nil {
return service.OpenAIErrorWrapper(err, "close_response_body_failed", http.StatusInternalServerError), nil
return types.NewError(err, types.ErrorCodeBadResponseBody), nil
}
common.CloseResponseBodyGracefully(resp)
var cohereResp CohereRerankResponseResult
err = json.Unmarshal(responseBody, &cohereResp)
if err != nil {
return service.OpenAIErrorWrapper(err, "unmarshal_response_body_failed", http.StatusInternalServerError), nil
return types.NewError(err, types.ErrorCodeBadResponseBody), nil
}
usage := dto.Usage{}
if cohereResp.Meta.BilledUnits.InputTokens == 0 {
@@ -243,7 +239,7 @@ func cohereRerankHandler(c *gin.Context, resp *http.Response, info *relaycommon.
jsonResponse, err := json.Marshal(rerankResp)
if err != nil {
return service.OpenAIErrorWrapper(err, "marshal_response_body_failed", http.StatusInternalServerError), nil
return types.NewError(err, types.ErrorCodeBadResponseBody), nil
}
c.Writer.Header().Set("Content-Type", "application/json")
c.Writer.WriteHeader(resp.StatusCode)

View File

@@ -9,6 +9,7 @@ import (
"one-api/dto"
"one-api/relay/channel"
"one-api/relay/common"
"one-api/types"
"time"
"github.com/gin-gonic/gin"
@@ -95,11 +96,11 @@ func (a *Adaptor) DoRequest(c *gin.Context, info *common.RelayInfo, requestBody
}
// DoResponse implements channel.Adaptor.
func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *common.RelayInfo) (usage any, err *dto.OpenAIErrorWithStatusCode) {
func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *common.RelayInfo) (usage any, err *types.NewAPIError) {
if info.IsStream {
err, usage = cozeChatStreamHandler(c, resp, info)
err, usage = cozeChatStreamHandler(c, info, resp)
} else {
err, usage = cozeChatHandler(c, resp, info)
err, usage = cozeChatHandler(c, info, resp)
}
return
}

View File

@@ -12,6 +12,7 @@ import (
relaycommon "one-api/relay/common"
"one-api/relay/helper"
"one-api/service"
"one-api/types"
"strings"
"github.com/gin-gonic/gin"
@@ -43,25 +44,22 @@ func convertCozeChatRequest(c *gin.Context, request dto.GeneralOpenAIRequest) *C
return cozeRequest
}
func cozeChatHandler(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (*dto.OpenAIErrorWithStatusCode, *dto.Usage) {
func cozeChatHandler(c *gin.Context, info *relaycommon.RelayInfo, resp *http.Response) (*types.NewAPIError, *dto.Usage) {
responseBody, err := io.ReadAll(resp.Body)
if err != nil {
return service.OpenAIErrorWrapper(err, "read_response_body_failed", http.StatusInternalServerError), nil
}
err = resp.Body.Close()
if err != nil {
return service.OpenAIErrorWrapperLocal(err, "close_response_body_failed", http.StatusInternalServerError), nil
return types.NewError(err, types.ErrorCodeBadResponseBody), nil
}
common.CloseResponseBodyGracefully(resp)
// convert coze response to openai response
var response dto.TextResponse
var cozeResponse CozeChatDetailResponse
response.Model = info.UpstreamModelName
err = json.Unmarshal(responseBody, &cozeResponse)
if err != nil {
return service.OpenAIErrorWrapper(err, "unmarshal_response_body_failed", http.StatusInternalServerError), nil
return types.NewError(err, types.ErrorCodeBadResponseBody), nil
}
if cozeResponse.Code != 0 {
return service.OpenAIErrorWrapper(errors.New(cozeResponse.Msg), fmt.Sprintf("%d", cozeResponse.Code), http.StatusInternalServerError), nil
return types.NewError(errors.New(cozeResponse.Msg), types.ErrorCodeBadResponseBody), nil
}
// read usage from the request context
var usage dto.Usage
@@ -88,7 +86,7 @@ func cozeChatHandler(c *gin.Context, resp *http.Response, info *relaycommon.Rela
}
jsonResponse, err := json.Marshal(response)
if err != nil {
return service.OpenAIErrorWrapper(err, "marshal_response_body_failed", http.StatusInternalServerError), nil
return types.NewError(err, types.ErrorCodeBadResponseBody), nil
}
c.Writer.Header().Set("Content-Type", "application/json")
c.Writer.WriteHeader(resp.StatusCode)
@@ -97,7 +95,7 @@ func cozeChatHandler(c *gin.Context, resp *http.Response, info *relaycommon.Rela
return nil, &usage
}
func cozeChatStreamHandler(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (*dto.OpenAIErrorWithStatusCode, *dto.Usage) {
func cozeChatStreamHandler(c *gin.Context, info *relaycommon.RelayInfo, resp *http.Response) (*types.NewAPIError, *dto.Usage) {
scanner := bufio.NewScanner(resp.Body)
scanner.Split(bufio.ScanLines)
helper.SetEventStreamHeaders(c)
@@ -138,7 +136,7 @@ func cozeChatStreamHandler(c *gin.Context, resp *http.Response, info *relaycommo
}
if err := scanner.Err(); err != nil {
return service.OpenAIErrorWrapper(err, "stream_scanner_error", http.StatusInternalServerError), nil
return types.NewError(err, types.ErrorCodeBadResponseBody), nil
}
helper.Done(c)
@@ -281,8 +279,8 @@ func getChatDetail(a *Adaptor, c *gin.Context, info *relaycommon.RelayInfo) (*ht
func doRequest(req *http.Request, info *relaycommon.RelayInfo) (*http.Response, error) {
var client *http.Client
var err error // declare the err variable
if proxyURL, ok := info.ChannelSetting["proxy"]; ok {
client, err = service.NewProxyHttpClient(proxyURL.(string))
if info.ChannelSetting.Proxy != "" {
client, err = service.NewProxyHttpClient(info.ChannelSetting.Proxy)
if err != nil {
return nil, fmt.Errorf("new proxy http client failed: %w", err)
}

View File

@@ -10,6 +10,7 @@ import (
"one-api/relay/channel/openai"
relaycommon "one-api/relay/common"
"one-api/relay/constant"
"one-api/types"
"strings"
"github.com/gin-gonic/gin"
@@ -81,11 +82,11 @@ func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, request
return channel.DoApiRequest(a, c, info, requestBody)
}
func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (usage any, err *dto.OpenAIErrorWithStatusCode) {
func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (usage any, err *types.NewAPIError) {
if info.IsStream {
err, usage = openai.OaiStreamHandler(c, resp, info)
usage, err = openai.OaiStreamHandler(c, info, resp)
} else {
err, usage = openai.OpenaiHandler(c, resp, info)
usage, err = openai.OpenaiHandler(c, info, resp)
}
return
}

View File

@@ -8,6 +8,7 @@ import (
"one-api/dto"
"one-api/relay/channel"
relaycommon "one-api/relay/common"
"one-api/types"
"github.com/gin-gonic/gin"
)
@@ -96,11 +97,11 @@ func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, request
return channel.DoApiRequest(a, c, info, requestBody)
}
func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (usage any, err *dto.OpenAIErrorWithStatusCode) {
func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (usage any, err *types.NewAPIError) {
if info.IsStream {
err, usage = difyStreamHandler(c, resp, info)
return difyStreamHandler(c, info, resp)
} else {
err, usage = difyHandler(c, resp, info)
return difyHandler(c, info, resp)
}
return
}

View File

@@ -14,6 +14,7 @@ import (
relaycommon "one-api/relay/common"
"one-api/relay/helper"
"one-api/service"
"one-api/types"
"os"
"strings"
@@ -95,7 +96,7 @@ func uploadDifyFile(c *gin.Context, info *relaycommon.RelayInfo, user string, me
req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", info.ApiKey))
// Send request
client := service.GetImpatientHttpClient()
client := service.GetHttpClient()
resp, err := client.Do(req)
if err != nil {
common.SysError("failed to send request: " + err.Error())
@@ -209,7 +210,7 @@ func streamResponseDify2OpenAI(difyResponse DifyChunkChatCompletionResponse) *dt
return &response
}
func difyStreamHandler(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (*dto.OpenAIErrorWithStatusCode, *dto.Usage) {
func difyStreamHandler(c *gin.Context, info *relaycommon.RelayInfo, resp *http.Response) (*dto.Usage, *types.NewAPIError) {
var responseText string
usage := &dto.Usage{}
var nodeToken int
@@ -247,23 +248,20 @@ func difyStreamHandler(c *gin.Context, resp *http.Response, info *relaycommon.Re
usage = service.ResponseText2Usage(responseText, info.UpstreamModelName, info.PromptTokens)
}
usage.CompletionTokens += nodeToken
return nil, usage
return usage, nil
}
func difyHandler(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (*dto.OpenAIErrorWithStatusCode, *dto.Usage) {
func difyHandler(c *gin.Context, info *relaycommon.RelayInfo, resp *http.Response) (*dto.Usage, *types.NewAPIError) {
var difyResponse DifyChatCompletionResponse
responseBody, err := io.ReadAll(resp.Body)
if err != nil {
return service.OpenAIErrorWrapper(err, "read_response_body_failed", http.StatusInternalServerError), nil
}
err = resp.Body.Close()
if err != nil {
return service.OpenAIErrorWrapper(err, "close_response_body_failed", http.StatusInternalServerError), nil
return nil, types.NewError(err, types.ErrorCodeBadResponseBody)
}
common.CloseResponseBodyGracefully(resp)
err = json.Unmarshal(responseBody, &difyResponse)
if err != nil {
return service.OpenAIErrorWrapper(err, "unmarshal_response_body_failed", http.StatusInternalServerError), nil
return nil, types.NewError(err, types.ErrorCodeBadResponseBody)
}
fullTextResponse := dto.OpenAITextResponse{
Id: difyResponse.ConversationId,
@@ -282,10 +280,10 @@ func difyHandler(c *gin.Context, resp *http.Response, info *relaycommon.RelayInf
fullTextResponse.Choices = append(fullTextResponse.Choices, choice)
jsonResponse, err := json.Marshal(fullTextResponse)
if err != nil {
return service.OpenAIErrorWrapper(err, "marshal_response_body_failed", http.StatusInternalServerError), nil
return nil, types.NewError(err, types.ErrorCodeBadResponseBody)
}
c.Writer.Header().Set("Content-Type", "application/json")
c.Writer.WriteHeader(resp.StatusCode)
_, err = c.Writer.Write(jsonResponse)
return nil, &difyResponse.MetaData.Usage
c.Writer.Write(jsonResponse)
return &difyResponse.MetaData.Usage, nil
}

View File

@@ -11,8 +11,8 @@ import (
"one-api/relay/channel"
relaycommon "one-api/relay/common"
"one-api/relay/constant"
"one-api/service"
"one-api/setting/model_setting"
"one-api/types"
"strings"
"github.com/gin-gonic/gin"
@@ -168,30 +168,30 @@ func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, request
return channel.DoApiRequest(a, c, info, requestBody)
}
func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (usage any, err *dto.OpenAIErrorWithStatusCode) {
func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (usage any, err *types.NewAPIError) {
if info.RelayMode == constant.RelayModeGemini {
if info.IsStream {
return GeminiTextGenerationStreamHandler(c, resp, info)
return GeminiTextGenerationStreamHandler(c, info, resp)
} else {
return GeminiTextGenerationHandler(c, resp, info)
return GeminiTextGenerationHandler(c, info, resp)
}
}
if strings.HasPrefix(info.UpstreamModelName, "imagen") {
return GeminiImageHandler(c, resp, info)
return GeminiImageHandler(c, info, resp)
}
// check if the model is an embedding model
if strings.HasPrefix(info.UpstreamModelName, "text-embedding") ||
strings.HasPrefix(info.UpstreamModelName, "embedding") ||
strings.HasPrefix(info.UpstreamModelName, "gemini-embedding") {
return GeminiEmbeddingHandler(c, resp, info)
return GeminiEmbeddingHandler(c, info, resp)
}
if info.IsStream {
err, usage = GeminiChatStreamHandler(c, resp, info)
return GeminiChatStreamHandler(c, info, resp)
} else {
err, usage = GeminiChatHandler(c, resp, info)
return GeminiChatHandler(c, info, resp)
}
//if usage.(*dto.Usage).CompletionTokenDetails.ReasoningTokens > 100 {
@@ -205,23 +205,23 @@ func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *relaycom
// }
//}
return
return nil, types.NewError(errors.New("not implemented"), types.ErrorCodeBadResponseBody)
}
func GeminiImageHandler(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (usage any, err *dto.OpenAIErrorWithStatusCode) {
func GeminiImageHandler(c *gin.Context, info *relaycommon.RelayInfo, resp *http.Response) (*dto.Usage, *types.NewAPIError) {
responseBody, readErr := io.ReadAll(resp.Body)
if readErr != nil {
return nil, service.OpenAIErrorWrapper(readErr, "read_response_body_failed", http.StatusInternalServerError)
return nil, types.NewError(readErr, types.ErrorCodeBadResponseBody)
}
_ = resp.Body.Close()
var geminiResponse GeminiImageResponse
if jsonErr := json.Unmarshal(responseBody, &geminiResponse); jsonErr != nil {
return nil, service.OpenAIErrorWrapper(jsonErr, "unmarshal_response_body_failed", http.StatusInternalServerError)
return nil, types.NewError(jsonErr, types.ErrorCodeBadResponseBody)
}
if len(geminiResponse.Predictions) == 0 {
return nil, service.OpenAIErrorWrapper(errors.New("no images generated"), "no_images", http.StatusBadRequest)
return nil, types.NewError(errors.New("no images generated"), types.ErrorCodeBadResponseBody)
}
// convert to openai format response
@@ -241,7 +241,7 @@ func GeminiImageHandler(c *gin.Context, resp *http.Response, info *relaycommon.R
jsonResponse, jsonErr := json.Marshal(openAIResponse)
if jsonErr != nil {
return nil, service.OpenAIErrorWrapper(jsonErr, "marshal_response_failed", http.StatusInternalServerError)
return nil, types.NewError(jsonErr, types.ErrorCodeBadResponseBody)
}
c.Writer.Header().Set("Content-Type", "application/json")
@@ -253,7 +253,7 @@ func GeminiImageHandler(c *gin.Context, resp *http.Response, info *relaycommon.R
const imageTokens = 258
generatedImages := len(openAIResponse.Data)
usage = &dto.Usage{
usage := &dto.Usage{
PromptTokens: imageTokens * generatedImages, // each generated image costs a fixed 258 tokens
CompletionTokens: 0, // image generation produces no completion tokens
TotalTokens: imageTokens * generatedImages,
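A quick worked example of the fixed-cost accounting above: with the 258-token constant, a request that produces four images is billed 258 × 4 = 1032 prompt tokens and 1032 total tokens, with zero completion tokens.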

View File

@@ -1,7 +1,6 @@
package gemini
import (
"encoding/json"
"io"
"net/http"
"one-api/common"
@@ -9,20 +8,19 @@ import (
relaycommon "one-api/relay/common"
"one-api/relay/helper"
"one-api/service"
"one-api/types"
"strings"
"github.com/gin-gonic/gin"
)
func GeminiTextGenerationHandler(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (*dto.Usage, *dto.OpenAIErrorWithStatusCode) {
func GeminiTextGenerationHandler(c *gin.Context, info *relaycommon.RelayInfo, resp *http.Response) (*dto.Usage, *types.NewAPIError) {
defer common.CloseResponseBodyGracefully(resp)
// read the response body
responseBody, err := io.ReadAll(resp.Body)
if err != nil {
return nil, service.OpenAIErrorWrapper(err, "read_response_body_failed", http.StatusInternalServerError)
}
err = resp.Body.Close()
if err != nil {
return nil, service.OpenAIErrorWrapper(err, "close_response_body_failed", http.StatusInternalServerError)
return nil, types.NewError(err, types.ErrorCodeBadResponseBody)
}
if common.DebugEnabled {
@@ -31,9 +29,9 @@ func GeminiTextGenerationHandler(c *gin.Context, resp *http.Response, info *rela
// parse into the Gemini-native response format
var geminiResponse GeminiChatResponse
err = common.DecodeJson(responseBody, &geminiResponse)
err = common.Unmarshal(responseBody, &geminiResponse)
if err != nil {
return nil, service.OpenAIErrorWrapper(err, "unmarshal_response_body_failed", http.StatusInternalServerError)
return nil, types.NewError(err, types.ErrorCodeBadResponseBody)
}
// compute usage (based on UsageMetadata)
@@ -54,23 +52,17 @@ func GeminiTextGenerationHandler(c *gin.Context, resp *http.Response, info *rela
}
// return the Gemini-native JSON response as-is
jsonResponse, err := json.Marshal(geminiResponse)
jsonResponse, err := common.Marshal(geminiResponse)
if err != nil {
return nil, service.OpenAIErrorWrapper(err, "marshal_response_body_failed", http.StatusInternalServerError)
return nil, types.NewError(err, types.ErrorCodeBadResponseBody)
}
// set response headers and write the response
c.Writer.Header().Set("Content-Type", "application/json")
c.Writer.WriteHeader(resp.StatusCode)
_, err = c.Writer.Write(jsonResponse)
if err != nil {
return nil, service.OpenAIErrorWrapper(err, "write_response_failed", http.StatusInternalServerError)
}
common.IOCopyBytesGracefully(c, resp, jsonResponse)
return &usage, nil
}
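The renames threaded through these hunks (`common.DecodeJson` → `common.Unmarshal`, `common.DecodeJsonStr` → `common.UnmarshalJsonStr`, `json.Marshal` → `common.Marshal`) align the common helpers with `encoding/json` naming. A minimal sketch of the renamed surface, assuming thin wrappers; the real helpers may route through a faster codec:

package common

import "encoding/json"

// Thin stand-ins matching the standard library's naming (hypothetical sketch).
func Marshal(v any) ([]byte, error)          { return json.Marshal(v) }
func Unmarshal(data []byte, v any) error     { return json.Unmarshal(data, v) }
func UnmarshalJsonStr(s string, v any) error { return json.Unmarshal([]byte(s), v) }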
func GeminiTextGenerationStreamHandler(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (*dto.Usage, *dto.OpenAIErrorWithStatusCode) {
func GeminiTextGenerationStreamHandler(c *gin.Context, info *relaycommon.RelayInfo, resp *http.Response) (*dto.Usage, *types.NewAPIError) {
var usage = &dto.Usage{}
var imageCount int
@@ -80,7 +72,7 @@ func GeminiTextGenerationStreamHandler(c *gin.Context, resp *http.Response, info
helper.StreamScannerHandler(c, resp, info, func(data string) bool {
var geminiResponse GeminiChatResponse
err := common.DecodeJsonStr(data, &geminiResponse)
err := common.UnmarshalJsonStr(data, &geminiResponse)
if err != nil {
common.LogError(c, "error unmarshalling stream response: "+err.Error())
return false

View File

@@ -2,6 +2,7 @@ package gemini
import (
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
@@ -12,6 +13,7 @@ import (
"one-api/relay/helper"
"one-api/service"
"one-api/setting/model_setting"
"one-api/types"
"strconv"
"strings"
"unicode/utf8"
@@ -792,7 +794,7 @@ func streamResponseGeminiChat2OpenAI(geminiResponse *GeminiChatResponse) (*dto.C
return &response, isStop, hasImage
}
func GeminiChatStreamHandler(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (*dto.OpenAIErrorWithStatusCode, *dto.Usage) {
func GeminiChatStreamHandler(c *gin.Context, info *relaycommon.RelayInfo, resp *http.Response) (*dto.Usage, *types.NewAPIError) {
// responseText := ""
id := helper.GetResponseID(c)
createAt := common.GetTimestamp()
@@ -801,7 +803,7 @@ func GeminiChatStreamHandler(c *gin.Context, resp *http.Response, info *relaycom
helper.StreamScannerHandler(c, resp, info, func(data string) bool {
var geminiResponse GeminiChatResponse
err := common.DecodeJsonStr(data, &geminiResponse)
err := common.UnmarshalJsonStr(data, &geminiResponse)
if err != nil {
common.LogError(c, "error unmarshalling stream response: "+err.Error())
return false
@@ -858,36 +860,25 @@ func GeminiChatStreamHandler(c *gin.Context, resp *http.Response, info *relaycom
}
helper.Done(c)
//resp.Body.Close()
return nil, usage
return usage, nil
}
func GeminiChatHandler(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (*dto.OpenAIErrorWithStatusCode, *dto.Usage) {
func GeminiChatHandler(c *gin.Context, info *relaycommon.RelayInfo, resp *http.Response) (*dto.Usage, *types.NewAPIError) {
responseBody, err := io.ReadAll(resp.Body)
if err != nil {
return service.OpenAIErrorWrapper(err, "read_response_body_failed", http.StatusInternalServerError), nil
}
err = resp.Body.Close()
if err != nil {
return service.OpenAIErrorWrapper(err, "close_response_body_failed", http.StatusInternalServerError), nil
return nil, types.NewError(err, types.ErrorCodeBadResponseBody)
}
common.CloseResponseBodyGracefully(resp)
if common.DebugEnabled {
println(string(responseBody))
}
var geminiResponse GeminiChatResponse
err = common.DecodeJson(responseBody, &geminiResponse)
err = common.Unmarshal(responseBody, &geminiResponse)
if err != nil {
return service.OpenAIErrorWrapper(err, "unmarshal_response_body_failed", http.StatusInternalServerError), nil
return nil, types.NewError(err, types.ErrorCodeBadResponseBody)
}
if len(geminiResponse.Candidates) == 0 {
return &dto.OpenAIErrorWithStatusCode{
Error: dto.OpenAIError{
Message: "No candidates returned",
Type: "server_error",
Param: "",
Code: 500,
},
StatusCode: resp.StatusCode,
}, nil
return nil, types.NewError(errors.New("no candidates returned"), types.ErrorCodeBadResponseBody)
}
fullTextResponse := responseGeminiChat2OpenAI(c, &geminiResponse)
fullTextResponse.Model = info.UpstreamModelName
@@ -911,24 +902,25 @@ func GeminiChatHandler(c *gin.Context, resp *http.Response, info *relaycommon.Re
fullTextResponse.Usage = usage
jsonResponse, err := json.Marshal(fullTextResponse)
if err != nil {
return service.OpenAIErrorWrapper(err, "marshal_response_body_failed", http.StatusInternalServerError), nil
return nil, types.NewError(err, types.ErrorCodeBadResponseBody)
}
c.Writer.Header().Set("Content-Type", "application/json")
c.Writer.WriteHeader(resp.StatusCode)
_, err = c.Writer.Write(jsonResponse)
return nil, &usage
c.Writer.Write(jsonResponse)
return &usage, nil
}
func GeminiEmbeddingHandler(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (usage any, err *dto.OpenAIErrorWithStatusCode) {
func GeminiEmbeddingHandler(c *gin.Context, info *relaycommon.RelayInfo, resp *http.Response) (*dto.Usage, *types.NewAPIError) {
defer common.CloseResponseBodyGracefully(resp)
responseBody, readErr := io.ReadAll(resp.Body)
if readErr != nil {
return nil, service.OpenAIErrorWrapper(readErr, "read_response_body_failed", http.StatusInternalServerError)
return nil, types.NewError(readErr, types.ErrorCodeBadResponseBody)
}
_ = resp.Body.Close()
var geminiResponse GeminiEmbeddingResponse
if jsonErr := json.Unmarshal(responseBody, &geminiResponse); jsonErr != nil {
return nil, service.OpenAIErrorWrapper(jsonErr, "unmarshal_response_body_failed", http.StatusInternalServerError)
if jsonErr := common.Unmarshal(responseBody, &geminiResponse); jsonErr != nil {
return nil, types.NewError(jsonErr, types.ErrorCodeBadResponseBody)
}
// convert to openai format response
@@ -949,21 +941,18 @@ func GeminiEmbeddingHandler(c *gin.Context, resp *http.Response, info *relaycomm
// Google has not yet clarified how embedding models will be billed,
// so follow OpenAI's billing method and bill by input tokens
// https://platform.openai.com/docs/guides/embeddings#what-are-embeddings
usage = &dto.Usage{
usage := &dto.Usage{
PromptTokens: info.PromptTokens,
CompletionTokens: 0,
TotalTokens: info.PromptTokens,
}
openAIResponse.Usage = *usage.(*dto.Usage)
openAIResponse.Usage = *usage
jsonResponse, jsonErr := json.Marshal(openAIResponse)
jsonResponse, jsonErr := common.Marshal(openAIResponse)
if jsonErr != nil {
return nil, service.OpenAIErrorWrapper(jsonErr, "marshal_response_failed", http.StatusInternalServerError)
return nil, types.NewError(jsonErr, types.ErrorCodeBadResponseBody)
}
c.Writer.Header().Set("Content-Type", "application/json")
c.Writer.WriteHeader(resp.StatusCode)
_, _ = c.Writer.Write(jsonResponse)
common.IOCopyBytesGracefully(c, resp, jsonResponse)
return usage, nil
}
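Editor's note: every handler in this diff now returns `*types.NewAPIError`, built with `types.NewError(err, code)` (and `types.WithOpenAIError(...)` where an upstream OpenAI-format error is passed through). The type's definition is not among the shown files; the sketch below is a minimal shape consistent with these call sites, with field names and the default status code as assumptions.

```go
package types

import "net/http"

// ErrorCode values mirror the codes that appear at the call sites above.
type ErrorCode string

const (
	ErrorCodeBadResponse            ErrorCode = "bad_response"
	ErrorCodeBadResponseBody        ErrorCode = "bad_response_body"
	ErrorCodeReadResponseBodyFailed ErrorCode = "read_response_body_failed"
	ErrorCodeCountTokenFailed       ErrorCode = "count_token_failed"
)

// NewAPIError carries the underlying failure, a stable machine-readable code,
// and an HTTP status between relay handlers. (Sketch; fields are assumed.)
type NewAPIError struct {
	Err        error
	Code       ErrorCode
	StatusCode int
}

func (e *NewAPIError) Error() string {
	if e.Err == nil {
		return string(e.Code)
	}
	return string(e.Code) + ": " + e.Err.Error()
}

// NewError wraps an internal failure; the 500 default is an assumption.
func NewError(err error, code ErrorCode) *NewAPIError {
	return &NewAPIError{Err: err, Code: code, StatusCode: http.StatusInternalServerError}
}
```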

View File

@@ -11,6 +11,7 @@ import (
relaycommon "one-api/relay/common"
"one-api/relay/common_handler"
"one-api/relay/constant"
"one-api/types"
"github.com/gin-gonic/gin"
)
@@ -73,11 +74,11 @@ func (a *Adaptor) ConvertEmbeddingRequest(c *gin.Context, info *relaycommon.Rela
return request, nil
}
func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (usage any, err *dto.OpenAIErrorWithStatusCode) {
func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (usage any, err *types.NewAPIError) {
if info.RelayMode == constant.RelayModeRerank {
err, usage = common_handler.RerankHandler(c, info, resp)
usage, err = common_handler.RerankHandler(c, info, resp)
} else if info.RelayMode == constant.RelayModeEmbeddings {
err, usage = openai.OpenaiHandler(c, resp, info)
usage, err = openai.OpenaiHandler(c, info, resp)
}
return
}

View File

@@ -3,6 +3,7 @@ package jina
var ModelList = []string{
"jina-clip-v1",
"jina-reranker-v2-base-multilingual",
"jina-reranker-m0",
}
var ChannelName = "jina"

View File

@@ -8,6 +8,7 @@ import (
"one-api/relay/channel"
"one-api/relay/channel/openai"
relaycommon "one-api/relay/common"
"one-api/types"
"github.com/gin-gonic/gin"
)
@@ -69,11 +70,11 @@ func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, request
return channel.DoApiRequest(a, c, info, requestBody)
}
func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (usage any, err *dto.OpenAIErrorWithStatusCode) {
func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (usage any, err *types.NewAPIError) {
if info.IsStream {
err, usage = openai.OaiStreamHandler(c, resp, info)
usage, err = openai.OaiStreamHandler(c, info, resp)
} else {
err, usage = openai.OpenaiHandler(c, resp, info)
usage, err = openai.OpenaiHandler(c, info, resp)
}
return
}

View File

@@ -9,6 +9,7 @@ import (
"one-api/relay/channel"
relaycommon "one-api/relay/common"
"one-api/relay/constant"
"one-api/types"
"strings"
"github.com/gin-gonic/gin"
@@ -84,11 +85,11 @@ func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, request
return channel.DoApiRequest(a, c, info, requestBody)
}
func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (usage any, err *dto.OpenAIErrorWithStatusCode) {
func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (usage any, err *types.NewAPIError) {
switch info.RelayMode {
case constant.RelayModeEmbeddings:
err, usage = mokaEmbeddingHandler(c, resp)
return mokaEmbeddingHandler(c, info, resp)
default:
// err, usage = mokaHandler(c, resp)

View File

@@ -2,11 +2,14 @@ package mokaai
import (
"encoding/json"
"github.com/gin-gonic/gin"
"io"
"net/http"
"one-api/common"
"one-api/dto"
"one-api/service"
relaycommon "one-api/relay/common"
"one-api/types"
"github.com/gin-gonic/gin"
)
func embeddingRequestOpenAI2Moka(request dto.GeneralOpenAIRequest) *dto.EmbeddingRequest {
@@ -26,7 +29,7 @@ func embeddingRequestOpenAI2Moka(request dto.GeneralOpenAIRequest) *dto.Embeddin
}
return &dto.EmbeddingRequest{
Input: input,
Model: request.Model,
Model: request.Model,
}
}
@@ -47,19 +50,16 @@ func embeddingResponseMoka2OpenAI(response *dto.EmbeddingResponse) *dto.OpenAIEm
return &openAIEmbeddingResponse
}
func mokaEmbeddingHandler(c *gin.Context, resp *http.Response) (*dto.OpenAIErrorWithStatusCode, *dto.Usage) {
func mokaEmbeddingHandler(c *gin.Context, info *relaycommon.RelayInfo, resp *http.Response) (*dto.Usage, *types.NewAPIError) {
var baiduResponse dto.EmbeddingResponse
responseBody, err := io.ReadAll(resp.Body)
if err != nil {
return service.OpenAIErrorWrapper(err, "read_response_body_failed", http.StatusInternalServerError), nil
}
err = resp.Body.Close()
if err != nil {
return service.OpenAIErrorWrapper(err, "close_response_body_failed", http.StatusInternalServerError), nil
return nil, types.NewError(err, types.ErrorCodeBadResponseBody)
}
common.CloseResponseBodyGracefully(resp)
err = json.Unmarshal(responseBody, &baiduResponse)
if err != nil {
return service.OpenAIErrorWrapper(err, "unmarshal_response_body_failed", http.StatusInternalServerError), nil
return nil, types.NewError(err, types.ErrorCodeBadResponseBody)
}
// if baiduResponse.ErrorMsg != "" {
// return &dto.OpenAIErrorWithStatusCode{
@@ -71,13 +71,12 @@ func mokaEmbeddingHandler(c *gin.Context, resp *http.Response) (*dto.OpenAIError
// }, nil
// }
fullTextResponse := embeddingResponseMoka2OpenAI(&baiduResponse)
jsonResponse, err := json.Marshal(fullTextResponse)
jsonResponse, err := common.Marshal(fullTextResponse)
if err != nil {
return service.OpenAIErrorWrapper(err, "marshal_response_body_failed", http.StatusInternalServerError), nil
return nil, types.NewError(err, types.ErrorCodeBadResponseBody)
}
c.Writer.Header().Set("Content-Type", "application/json")
c.Writer.WriteHeader(resp.StatusCode)
_, err = c.Writer.Write(jsonResponse)
return nil, &fullTextResponse.Usage
common.IOCopyBytesGracefully(c, resp, jsonResponse)
return &fullTextResponse.Usage, nil
}
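Editor's note: the header-copy / Content-Length / io.Copy boilerplate deleted in this and the following files is centralized in `common.IOCopyBytesGracefully(c, resp, body)`. Its body is not part of the diff; the sketch below reproduces the behavior of the removed code and is an assumption, not the actual helper.

```go
package common

import (
	"log"
	"net/http"
	"strconv"

	"github.com/gin-gonic/gin"
)

// IOCopyBytesGracefully writes an already-serialized body to the client,
// mirroring the upstream status and headers. Write errors are logged rather
// than returned: once the status line has gone out, the request cannot be
// retried anyway.
func IOCopyBytesGracefully(c *gin.Context, resp *http.Response, body []byte) {
	status := http.StatusOK
	if resp != nil {
		for k, values := range resp.Header {
			c.Writer.Header().Del(k) // drop any header set earlier to avoid duplicates
			for _, v := range values {
				c.Writer.Header().Add(k, v)
			}
		}
		status = resp.StatusCode
	}
	// the body may have been rewritten, so recompute Content-Length
	c.Writer.Header().Set("Content-Length", strconv.Itoa(len(body)))
	c.Writer.WriteHeader(status)
	if _, err := c.Writer.Write(body); err != nil {
		log.Println("error writing response body: " + err.Error())
	}
}
```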

View File

@@ -9,6 +9,7 @@ import (
"one-api/relay/channel/openai"
relaycommon "one-api/relay/common"
relayconstant "one-api/relay/constant"
"one-api/types"
"github.com/gin-gonic/gin"
)
@@ -74,14 +75,14 @@ func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, request
return channel.DoApiRequest(a, c, info, requestBody)
}
func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (usage any, err *dto.OpenAIErrorWithStatusCode) {
func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (usage any, err *types.NewAPIError) {
if info.IsStream {
err, usage = openai.OaiStreamHandler(c, resp, info)
usage, err = openai.OaiStreamHandler(c, info, resp)
} else {
if info.RelayMode == relayconstant.RelayModeEmbeddings {
err, usage = ollamaEmbeddingHandler(c, resp, info.PromptTokens, info.UpstreamModelName, info.RelayMode)
usage, err = ollamaEmbeddingHandler(c, info, resp)
} else {
err, usage = openai.OpenaiHandler(c, resp, info)
usage, err = openai.OpenaiHandler(c, info, resp)
}
}
return

View File

@@ -1,15 +1,17 @@
package ollama
import (
"bytes"
"encoding/json"
"fmt"
"github.com/gin-gonic/gin"
"io"
"net/http"
"one-api/common"
"one-api/dto"
relaycommon "one-api/relay/common"
"one-api/service"
"one-api/types"
"strings"
"github.com/gin-gonic/gin"
)
func requestOpenAI2Ollama(request dto.GeneralOpenAIRequest) (*OllamaRequest, error) {
@@ -82,22 +84,19 @@ func requestOpenAI2Embeddings(request dto.EmbeddingRequest) *OllamaEmbeddingRequ
}
}
func ollamaEmbeddingHandler(c *gin.Context, resp *http.Response, promptTokens int, model string, relayMode int) (*dto.OpenAIErrorWithStatusCode, *dto.Usage) {
func ollamaEmbeddingHandler(c *gin.Context, info *relaycommon.RelayInfo, resp *http.Response) (*dto.Usage, *types.NewAPIError) {
var ollamaEmbeddingResponse OllamaEmbeddingResponse
responseBody, err := io.ReadAll(resp.Body)
if err != nil {
return service.OpenAIErrorWrapper(err, "read_response_body_failed", http.StatusInternalServerError), nil
return nil, types.NewError(err, types.ErrorCodeBadResponseBody)
}
err = resp.Body.Close()
common.CloseResponseBodyGracefully(resp)
err = common.Unmarshal(responseBody, &ollamaEmbeddingResponse)
if err != nil {
return service.OpenAIErrorWrapper(err, "close_response_body_failed", http.StatusInternalServerError), nil
}
err = json.Unmarshal(responseBody, &ollamaEmbeddingResponse)
if err != nil {
return service.OpenAIErrorWrapper(err, "unmarshal_response_body_failed", http.StatusInternalServerError), nil
return nil, types.NewError(err, types.ErrorCodeBadResponseBody)
}
if ollamaEmbeddingResponse.Error != "" {
return service.OpenAIErrorWrapper(err, "ollama_error", resp.StatusCode), nil
return nil, types.NewError(fmt.Errorf("ollama error: %s", ollamaEmbeddingResponse.Error), types.ErrorCodeBadResponseBody) // err is nil here, so wrap the upstream error message instead
}
flattenedEmbeddings := flattenEmbeddings(ollamaEmbeddingResponse.Embedding)
data := make([]dto.OpenAIEmbeddingResponseItem, 0, 1)
@@ -106,46 +105,22 @@ func ollamaEmbeddingHandler(c *gin.Context, resp *http.Response, promptTokens in
Object: "embedding",
})
usage := &dto.Usage{
TotalTokens: promptTokens,
TotalTokens: info.PromptTokens,
CompletionTokens: 0,
PromptTokens: promptTokens,
PromptTokens: info.PromptTokens,
}
embeddingResponse := &dto.OpenAIEmbeddingResponse{
Object: "list",
Data: data,
Model: model,
Model: info.UpstreamModelName,
Usage: *usage,
}
doResponseBody, err := json.Marshal(embeddingResponse)
doResponseBody, err := common.Marshal(embeddingResponse)
if err != nil {
return service.OpenAIErrorWrapper(err, "marshal_response_body_failed", http.StatusInternalServerError), nil
return nil, types.NewError(err, types.ErrorCodeBadResponseBody)
}
resp.Body = io.NopCloser(bytes.NewBuffer(doResponseBody))
// We shouldn't set the header before we parse the response body, because the parse part may fail.
// And then we will have to send an error response, but in this case, the header has already been set.
// So the httpClient will be confused by the response.
// For example, Postman will report error, and we cannot check the response at all.
// Copy headers
for k, v := range resp.Header {
// remove any existing headers with the same name to avoid adding duplicates
c.Writer.Header().Del(k)
for _, vv := range v {
c.Writer.Header().Add(k, vv)
}
}
// reset content length
c.Writer.Header().Del("Content-Length")
c.Writer.Header().Set("Content-Length", fmt.Sprintf("%d", len(doResponseBody)))
c.Writer.WriteHeader(resp.StatusCode)
_, err = io.Copy(c.Writer, resp.Body)
if err != nil {
return service.OpenAIErrorWrapper(err, "copy_response_body_failed", http.StatusInternalServerError), nil
}
err = resp.Body.Close()
if err != nil {
return service.OpenAIErrorWrapper(err, "close_response_body_failed", http.StatusInternalServerError), nil
}
return nil, usage
common.IOCopyBytesGracefully(c, resp, doResponseBody)
return usage, nil
}
func flattenEmbeddings(embeddings [][]float64) []float64 {
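Editor's note: the repeated `resp.Body.Close()` error plumbing removed above is replaced by `common.CloseResponseBodyGracefully(resp)`, used both as a deferred call and inline after io.ReadAll. A plausible sketch, under the assumption that close failures are logged rather than propagated:

```go
package common

import (
	"log"
	"net/http"
)

// CloseResponseBodyGracefully closes an upstream response body without
// surfacing the error: a failed Close after a successful read is not
// actionable by the caller. (Sketch; the real helper is not shown in the diff.)
func CloseResponseBodyGracefully(resp *http.Response) {
	if resp == nil || resp.Body == nil {
		return
	}
	if err := resp.Body.Close(); err != nil {
		log.Println("error closing response body: " + err.Error())
	}
}
```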

View File

@@ -9,8 +9,7 @@ import (
"mime/multipart"
"net/http"
"net/textproto"
"one-api/common"
constant2 "one-api/constant"
"one-api/constant"
"one-api/dto"
"one-api/relay/channel"
"one-api/relay/channel/ai360"
@@ -21,8 +20,9 @@ import (
"one-api/relay/channel/xinference"
relaycommon "one-api/relay/common"
"one-api/relay/common_handler"
"one-api/relay/constant"
relayconstant "one-api/relay/constant"
"one-api/service"
"one-api/types"
"path/filepath"
"strings"
@@ -54,7 +54,7 @@ func (a *Adaptor) Init(info *relaycommon.RelayInfo) {
a.ChannelType = info.ChannelType
// initialize ThinkingContentInfo when thinking_to_content is enabled
if think2Content, ok := info.ChannelSetting[constant2.ChannelSettingThinkingToContent].(bool); ok && think2Content {
if info.ChannelSetting.ThinkingToContent {
info.ThinkingContentInfo = relaycommon.ThinkingContentInfo{
IsFirstThinkingContent: true,
SendLastThinkingContent: false,
@@ -67,7 +67,7 @@ func (a *Adaptor) GetRequestURL(info *relaycommon.RelayInfo) (string, error) {
if info.RelayFormat == relaycommon.RelayFormatClaude {
return fmt.Sprintf("%s/v1/chat/completions", info.BaseUrl), nil
}
if info.RelayMode == constant.RelayModeRealtime {
if info.RelayMode == relayconstant.RelayModeRealtime {
if strings.HasPrefix(info.BaseUrl, "https://") {
baseUrl := strings.TrimPrefix(info.BaseUrl, "https://")
baseUrl = "wss://" + baseUrl
@@ -79,10 +79,10 @@ func (a *Adaptor) GetRequestURL(info *relaycommon.RelayInfo) (string, error) {
}
}
switch info.ChannelType {
case common.ChannelTypeAzure:
case constant.ChannelTypeAzure:
apiVersion := info.ApiVersion
if apiVersion == "" {
apiVersion = constant2.AzureDefaultAPIVersion
apiVersion = constant.AzureDefaultAPIVersion
}
// https://learn.microsoft.com/en-us/azure/cognitive-services/openai/chatgpt-quickstart?pivots=rest-api&tabs=command-line#rest-api
requestURL := strings.Split(info.RequestURLPath, "?")[0]
@@ -90,25 +90,25 @@ func (a *Adaptor) GetRequestURL(info *relaycommon.RelayInfo) (string, error) {
task := strings.TrimPrefix(requestURL, "/v1/")
// special-case the responses API
if info.RelayMode == constant.RelayModeResponses {
if info.RelayMode == relayconstant.RelayModeResponses {
requestURL = "/openai/v1/responses?api-version=preview"
return relaycommon.GetFullRequestURL(info.BaseUrl, requestURL, info.ChannelType), nil
}
model_ := info.UpstreamModelName
// channels created after 2025-05-10 keep the dot in the model name.
if info.ChannelCreateTime < constant2.AzureNoRemoveDotTime {
if info.ChannelCreateTime < constant.AzureNoRemoveDotTime {
model_ = strings.Replace(model_, ".", "", -1)
}
// https://github.com/songquanpeng/one-api/issues/67
requestURL = fmt.Sprintf("/openai/deployments/%s/%s", model_, task)
if info.RelayMode == constant.RelayModeRealtime {
if info.RelayMode == relayconstant.RelayModeRealtime {
requestURL = fmt.Sprintf("/openai/realtime?deployment=%s&api-version=%s", model_, apiVersion)
}
return relaycommon.GetFullRequestURL(info.BaseUrl, requestURL, info.ChannelType), nil
case common.ChannelTypeMiniMax:
case constant.ChannelTypeMiniMax:
return minimax.GetRequestURL(info)
case common.ChannelTypeCustom:
case constant.ChannelTypeCustom:
url := info.BaseUrl
url = strings.Replace(url, "{model}", info.UpstreamModelName, -1)
return url, nil
@@ -119,14 +119,14 @@ func (a *Adaptor) GetRequestURL(info *relaycommon.RelayInfo) (string, error) {
func (a *Adaptor) SetupRequestHeader(c *gin.Context, header *http.Header, info *relaycommon.RelayInfo) error {
channel.SetupApiRequestHeader(info, c, header)
if info.ChannelType == common.ChannelTypeAzure {
if info.ChannelType == constant.ChannelTypeAzure {
header.Set("api-key", info.ApiKey)
return nil
}
if info.ChannelType == common.ChannelTypeOpenAI && "" != info.Organization {
if info.ChannelType == constant.ChannelTypeOpenAI && info.Organization != "" {
header.Set("OpenAI-Organization", info.Organization)
}
if info.RelayMode == constant.RelayModeRealtime {
if info.RelayMode == relayconstant.RelayModeRealtime {
swp := c.Request.Header.Get("Sec-WebSocket-Protocol")
if swp != "" {
items := []string{
@@ -145,8 +145,8 @@ func (a *Adaptor) SetupRequestHeader(c *gin.Context, header *http.Header, info *
} else {
header.Set("Authorization", "Bearer "+info.ApiKey)
}
if info.ChannelType == common.ChannelTypeOpenRouter {
header.Set("HTTP-Referer", "https://github.com/Calcium-Ion/new-api")
if info.ChannelType == constant.ChannelTypeOpenRouter {
header.Set("HTTP-Referer", "https://www.newapi.ai")
header.Set("X-Title", "New API")
}
return nil
@@ -156,10 +156,10 @@ func (a *Adaptor) ConvertOpenAIRequest(c *gin.Context, info *relaycommon.RelayIn
if request == nil {
return nil, errors.New("request is nil")
}
if info.ChannelType != common.ChannelTypeOpenAI && info.ChannelType != common.ChannelTypeAzure {
if info.ChannelType != constant.ChannelTypeOpenAI && info.ChannelType != constant.ChannelTypeAzure {
request.StreamOptions = nil
}
if info.ChannelType == common.ChannelTypeOpenRouter {
if info.ChannelType == constant.ChannelTypeOpenRouter {
if len(request.Usage) == 0 {
request.Usage = json.RawMessage(`{"include":true}`)
}
@@ -205,7 +205,7 @@ func (a *Adaptor) ConvertEmbeddingRequest(c *gin.Context, info *relaycommon.Rela
func (a *Adaptor) ConvertAudioRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.AudioRequest) (io.Reader, error) {
a.ResponseFormat = request.ResponseFormat
if info.RelayMode == constant.RelayModeAudioSpeech {
if info.RelayMode == relayconstant.RelayModeAudioSpeech {
jsonData, err := json.Marshal(request)
if err != nil {
return nil, fmt.Errorf("error marshalling object: %w", err)
@@ -254,7 +254,7 @@ func (a *Adaptor) ConvertAudioRequest(c *gin.Context, info *relaycommon.RelayInf
func (a *Adaptor) ConvertImageRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.ImageRequest) (any, error) {
switch info.RelayMode {
case constant.RelayModeImagesEdits:
case relayconstant.RelayModeImagesEdits:
var requestBody bytes.Buffer
writer := multipart.NewWriter(&requestBody)
@@ -411,42 +411,42 @@ func (a *Adaptor) ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommo
}
func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error) {
if info.RelayMode == constant.RelayModeAudioTranscription ||
info.RelayMode == constant.RelayModeAudioTranslation ||
info.RelayMode == constant.RelayModeImagesEdits {
if info.RelayMode == relayconstant.RelayModeAudioTranscription ||
info.RelayMode == relayconstant.RelayModeAudioTranslation ||
info.RelayMode == relayconstant.RelayModeImagesEdits {
return channel.DoFormRequest(a, c, info, requestBody)
} else if info.RelayMode == constant.RelayModeRealtime {
} else if info.RelayMode == relayconstant.RelayModeRealtime {
return channel.DoWssRequest(a, c, info, requestBody)
} else {
return channel.DoApiRequest(a, c, info, requestBody)
}
}
func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (usage any, err *dto.OpenAIErrorWithStatusCode) {
func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (usage any, err *types.NewAPIError) {
switch info.RelayMode {
case constant.RelayModeRealtime:
case relayconstant.RelayModeRealtime:
err, usage = OpenaiRealtimeHandler(c, info)
case constant.RelayModeAudioSpeech:
err, usage = OpenaiTTSHandler(c, resp, info)
case constant.RelayModeAudioTranslation:
case relayconstant.RelayModeAudioSpeech:
usage = OpenaiTTSHandler(c, resp, info)
case relayconstant.RelayModeAudioTranslation:
fallthrough
case constant.RelayModeAudioTranscription:
case relayconstant.RelayModeAudioTranscription:
err, usage = OpenaiSTTHandler(c, resp, info, a.ResponseFormat)
case constant.RelayModeImagesGenerations, constant.RelayModeImagesEdits:
err, usage = OpenaiHandlerWithUsage(c, resp, info)
case constant.RelayModeRerank:
err, usage = common_handler.RerankHandler(c, info, resp)
case constant.RelayModeResponses:
case relayconstant.RelayModeImagesGenerations, relayconstant.RelayModeImagesEdits:
usage, err = OpenaiHandlerWithUsage(c, info, resp)
case relayconstant.RelayModeRerank:
usage, err = common_handler.RerankHandler(c, info, resp)
case relayconstant.RelayModeResponses:
if info.IsStream {
err, usage = OaiResponsesStreamHandler(c, resp, info)
usage, err = OaiResponsesStreamHandler(c, info, resp)
} else {
err, usage = OaiResponsesHandler(c, resp, info)
usage, err = OaiResponsesHandler(c, info, resp)
}
default:
if info.IsStream {
err, usage = OaiStreamHandler(c, resp, info)
usage, err = OaiStreamHandler(c, info, resp)
} else {
err, usage = OpenaiHandler(c, resp, info)
usage, err = OpenaiHandler(c, info, resp)
}
}
return
@@ -454,17 +454,17 @@ func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *relaycom
func (a *Adaptor) GetModelList() []string {
switch a.ChannelType {
case common.ChannelType360:
case constant.ChannelType360:
return ai360.ModelList
case common.ChannelTypeMoonshot:
case constant.ChannelTypeMoonshot:
return moonshot.ModelList
case common.ChannelTypeLingYiWanWu:
case constant.ChannelTypeLingYiWanWu:
return lingyiwanwu.ModelList
case common.ChannelTypeMiniMax:
case constant.ChannelTypeMiniMax:
return minimax.ModelList
case common.ChannelTypeXinference:
case constant.ChannelTypeXinference:
return xinference.ModelList
case common.ChannelTypeOpenRouter:
case constant.ChannelTypeOpenRouter:
return openrouter.ModelList
default:
return ModelList
@@ -473,17 +473,17 @@ func (a *Adaptor) GetModelList() []string {
func (a *Adaptor) GetChannelName() string {
switch a.ChannelType {
case common.ChannelType360:
case constant.ChannelType360:
return ai360.ChannelName
case common.ChannelTypeMoonshot:
case constant.ChannelTypeMoonshot:
return moonshot.ChannelName
case common.ChannelTypeLingYiWanWu:
case constant.ChannelTypeLingYiWanWu:
return lingyiwanwu.ChannelName
case common.ChannelTypeMiniMax:
case constant.ChannelTypeMiniMax:
return minimax.ChannelName
case common.ChannelTypeXinference:
case constant.ChannelTypeXinference:
return xinference.ChannelName
case common.ChannelTypeOpenRouter:
case constant.ChannelTypeOpenRouter:
return openrouter.ChannelName
default:
return ChannelName
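Editor's note: this file swaps two constant namespaces at once: channel-type constants move from `one-api/common` to `one-api/constant`, and `one-api/relay/constant` is re-aliased from `constant` to `relayconstant` to make room. A condensed illustration of the resulting usage, assuming both packages export plain integer constants (the helper function is hypothetical):

```go
package openai

import (
	"one-api/constant"                     // channel types: ChannelTypeAzure, ChannelTypeOpenAI, ...
	relayconstant "one-api/relay/constant" // relay modes: RelayModeRealtime, RelayModeResponses, ...
)

// isAzureRealtime is illustrative only; it shows the two namespaces side by side.
func isAzureRealtime(channelType, relayMode int) bool {
	return channelType == constant.ChannelTypeAzure &&
		relayMode == relayconstant.RelayModeRealtime
}
```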

View File

@@ -2,7 +2,6 @@ package openai
import (
"bytes"
"encoding/json"
"fmt"
"io"
"math"
@@ -18,6 +17,8 @@ import (
"path/filepath"
"strings"
"one-api/types"
"github.com/bytedance/gopkg/util/gopool"
"github.com/gin-gonic/gin"
"github.com/gorilla/websocket"
@@ -34,7 +35,7 @@ func sendStreamData(c *gin.Context, info *relaycommon.RelayInfo, data string, fo
}
var lastStreamResponse dto.ChatCompletionsStreamResponse
if err := common.DecodeJsonStr(data, &lastStreamResponse); err != nil {
if err := common.UnmarshalJsonStr(data, &lastStreamResponse); err != nil {
return err
}
@@ -105,18 +106,19 @@ func sendStreamData(c *gin.Context, info *relaycommon.RelayInfo, data string, fo
return helper.ObjectData(c, lastStreamResponse)
}
func OaiStreamHandler(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (*dto.OpenAIErrorWithStatusCode, *dto.Usage) {
func OaiStreamHandler(c *gin.Context, info *relaycommon.RelayInfo, resp *http.Response) (*dto.Usage, *types.NewAPIError) {
if resp == nil || resp.Body == nil {
common.LogError(c, "invalid response or response body")
return service.OpenAIErrorWrapper(fmt.Errorf("invalid response"), "invalid_response", http.StatusInternalServerError), nil
return nil, types.NewError(fmt.Errorf("invalid response"), types.ErrorCodeBadResponse)
}
containStreamUsage := false
defer common.CloseResponseBodyGracefully(resp)
model := info.UpstreamModelName
var responseId string
var createAt int64 = 0
var systemFingerprint string
model := info.UpstreamModelName
var containStreamUsage bool
var responseTextBuilder strings.Builder
var toolCount int
var usage = &dto.Usage{}
@@ -124,12 +126,12 @@ func OaiStreamHandler(c *gin.Context, resp *http.Response, info *relaycommon.Rel
var forceFormat bool
var thinkToContent bool
if forceFmt, ok := info.ChannelSetting[constant.ForceFormat].(bool); ok {
forceFormat = forceFmt
if info.ChannelSetting.ForceFormat {
forceFormat = true
}
if think2Content, ok := info.ChannelSetting[constant.ChannelSettingThinkingToContent].(bool); ok {
thinkToContent = think2Content
if info.ChannelSetting.ThinkingToContent {
thinkToContent = true
}
var (
@@ -148,31 +150,15 @@ func OaiStreamHandler(c *gin.Context, resp *http.Response, info *relaycommon.Rel
return true
})
// handle the last response
shouldSendLastResp := true
var lastStreamResponse dto.ChatCompletionsStreamResponse
err := common.DecodeJsonStr(lastStreamData, &lastStreamResponse)
if err == nil {
responseId = lastStreamResponse.Id
createAt = lastStreamResponse.Created
systemFingerprint = lastStreamResponse.GetSystemFingerprint()
model = lastStreamResponse.Model
if service.ValidUsage(lastStreamResponse.Usage) {
containStreamUsage = true
usage = lastStreamResponse.Usage
if !info.ShouldIncludeUsage {
shouldSendLastResp = false
}
}
for _, choice := range lastStreamResponse.Choices {
if choice.FinishReason != nil {
shouldSendLastResp = true
}
}
if err := handleLastResponse(lastStreamData, &responseId, &createAt, &systemFingerprint, &model, &usage,
&containStreamUsage, info, &shouldSendLastResp); err != nil {
common.SysError("error handling last response: " + err.Error())
}
if shouldSendLastResp {
sendStreamData(c, info, lastStreamData, forceFormat, thinkToContent)
//err = handleStreamFormat(c, info, lastStreamData, forceFormat, thinkToContent)
if shouldSendLastResp && info.RelayFormat == relaycommon.RelayFormatOpenAI {
_ = sendStreamData(c, info, lastStreamData, forceFormat, thinkToContent)
}
// handle token accounting
@@ -184,7 +170,7 @@ func OaiStreamHandler(c *gin.Context, resp *http.Response, info *relaycommon.Rel
usage = service.ResponseText2Usage(responseTextBuilder.String(), info.UpstreamModelName, info.PromptTokens)
usage.CompletionTokens += toolCount * 7
} else {
if info.ChannelType == common.ChannelTypeDeepSeek {
if info.ChannelType == constant.ChannelTypeDeepSeek {
if usage.PromptCacheHitTokens != 0 {
usage.PromptTokensDetails.CachedTokens = usage.PromptCacheHitTokens
}
@@ -193,33 +179,28 @@ func OaiStreamHandler(c *gin.Context, resp *http.Response, info *relaycommon.Rel
handleFinalResponse(c, info, lastStreamData, responseId, createAt, model, systemFingerprint, usage, containStreamUsage)
return nil, usage
return usage, nil
}
func OpenaiHandler(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (*dto.OpenAIErrorWithStatusCode, *dto.Usage) {
func OpenaiHandler(c *gin.Context, info *relaycommon.RelayInfo, resp *http.Response) (*dto.Usage, *types.NewAPIError) {
defer common.CloseResponseBodyGracefully(resp)
var simpleResponse dto.OpenAITextResponse
responseBody, err := io.ReadAll(resp.Body)
if err != nil {
return service.OpenAIErrorWrapper(err, "read_response_body_failed", http.StatusInternalServerError), nil
return nil, types.NewError(err, types.ErrorCodeReadResponseBodyFailed)
}
err = resp.Body.Close()
err = common.Unmarshal(responseBody, &simpleResponse)
if err != nil {
return service.OpenAIErrorWrapper(err, "close_response_body_failed", http.StatusInternalServerError), nil
}
err = common.DecodeJson(responseBody, &simpleResponse)
if err != nil {
return service.OpenAIErrorWrapper(err, "unmarshal_response_body_failed", http.StatusInternalServerError), nil
return nil, types.NewError(err, types.ErrorCodeBadResponseBody)
}
if simpleResponse.Error != nil && simpleResponse.Error.Type != "" {
return &dto.OpenAIErrorWithStatusCode{
Error: *simpleResponse.Error,
StatusCode: resp.StatusCode,
}, nil
return nil, types.WithOpenAIError(*simpleResponse.Error, resp.StatusCode)
}
forceFormat := false
if forceFmt, ok := info.ChannelSetting[constant.ForceFormat].(bool); ok {
forceFormat = forceFmt
if info.ChannelSetting.ForceFormat {
forceFormat = true
}
if simpleResponse.Usage.TotalTokens == 0 || (simpleResponse.Usage.PromptTokens == 0 && simpleResponse.Usage.CompletionTokens == 0) {
@@ -238,49 +219,35 @@ func OpenaiHandler(c *gin.Context, resp *http.Response, info *relaycommon.RelayI
switch info.RelayFormat {
case relaycommon.RelayFormatOpenAI:
if forceFormat {
responseBody, err = json.Marshal(simpleResponse)
responseBody, err = common.Marshal(simpleResponse)
if err != nil {
return service.OpenAIErrorWrapper(err, "marshal_response_body_failed", http.StatusInternalServerError), nil
return nil, types.NewError(err, types.ErrorCodeBadResponseBody)
}
} else {
break
}
case relaycommon.RelayFormatClaude:
claudeResp := service.ResponseOpenAI2Claude(&simpleResponse, info)
claudeRespStr, err := json.Marshal(claudeResp)
claudeRespStr, err := common.Marshal(claudeResp)
if err != nil {
return service.OpenAIErrorWrapper(err, "marshal_response_body_failed", http.StatusInternalServerError), nil
return nil, types.NewError(err, types.ErrorCodeBadResponseBody)
}
responseBody = claudeRespStr
}
// Reset response body
resp.Body = io.NopCloser(bytes.NewBuffer(responseBody))
// We shouldn't set the header before we parse the response body, because the parse part may fail.
// And then we will have to send an error response, but in this case, the header has already been set.
// So the httpClient will be confused by the response.
// For example, Postman will report error, and we cannot check the response at all.
for k, v := range resp.Header {
c.Writer.Header().Set(k, v[0])
}
c.Writer.WriteHeader(resp.StatusCode)
_, err = io.Copy(c.Writer, resp.Body)
if err != nil {
//return service.OpenAIErrorWrapper(err, "copy_response_body_failed", http.StatusInternalServerError), nil
common.SysError("error copying response body: " + err.Error())
}
resp.Body.Close()
return nil, &simpleResponse.Usage
common.IOCopyBytesGracefully(c, resp, responseBody)
return &simpleResponse.Usage, nil
}
func OpenaiTTSHandler(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (*dto.OpenAIErrorWithStatusCode, *dto.Usage) {
func OpenaiTTSHandler(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) *dto.Usage {
// the status code has been judged before, if there is a body reading failure,
// it should be regarded as a non-recoverable error, so it should not return err for external retry.
// Analogous to nginx's load balancing, it will only retry if it can't be requested or
// if the upstream returns a specific status code, once the upstream has already written the header,
// the subsequent failure of the response body should be regarded as a non-recoverable error,
// and can be terminated directly.
defer resp.Body.Close()
defer common.CloseResponseBodyGracefully(resp)
usage := &dto.Usage{}
usage.PromptTokens = info.PromptTokens
usage.TotalTokens = info.PromptTokens
@@ -293,38 +260,23 @@ func OpenaiTTSHandler(c *gin.Context, resp *http.Response, info *relaycommon.Rel
if err != nil {
common.LogError(c, err.Error())
}
return nil, usage
return usage
}
func OpenaiSTTHandler(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo, responseFormat string) (*dto.OpenAIErrorWithStatusCode, *dto.Usage) {
func OpenaiSTTHandler(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo, responseFormat string) (*types.NewAPIError, *dto.Usage) {
defer common.CloseResponseBodyGracefully(resp)
// count tokens by audio file duration
audioTokens, err := countAudioTokens(c)
if err != nil {
return service.OpenAIErrorWrapper(err, "count_audio_tokens_failed", http.StatusInternalServerError), nil
return types.NewError(err, types.ErrorCodeCountTokenFailed), nil
}
responseBody, err := io.ReadAll(resp.Body)
if err != nil {
return service.OpenAIErrorWrapper(err, "read_response_body_failed", http.StatusInternalServerError), nil
return types.NewError(err, types.ErrorCodeReadResponseBodyFailed), nil
}
err = resp.Body.Close()
if err != nil {
return service.OpenAIErrorWrapper(err, "close_response_body_failed", http.StatusInternalServerError), nil
}
// Reset response body
resp.Body = io.NopCloser(bytes.NewBuffer(responseBody))
// We shouldn't set the header before we parse the response body, because the parse part may fail.
// And then we will have to send an error response, but in this case, the header has already been set.
// So the httpClient will be confused by the response.
// For example, Postman will report error, and we cannot check the response at all.
for k, v := range resp.Header {
c.Writer.Header().Set(k, v[0])
}
c.Writer.WriteHeader(resp.StatusCode)
_, err = io.Copy(c.Writer, resp.Body)
if err != nil {
return service.OpenAIErrorWrapper(err, "copy_response_body_failed", http.StatusInternalServerError), nil
}
resp.Body.Close()
// write the new response body
common.IOCopyBytesGracefully(c, resp, responseBody)
usage := &dto.Usage{}
usage.PromptTokens = audioTokens
@@ -375,9 +327,9 @@ func countAudioTokens(c *gin.Context) (int, error) {
return int(math.Round(math.Ceil(duration) / 60.0 * 1000)), nil // 1 minute is equivalent to 1k tokens
}
func OpenaiRealtimeHandler(c *gin.Context, info *relaycommon.RelayInfo) (*dto.OpenAIErrorWithStatusCode, *dto.RealtimeUsage) {
func OpenaiRealtimeHandler(c *gin.Context, info *relaycommon.RelayInfo) (*types.NewAPIError, *dto.RealtimeUsage) {
if info == nil || info.ClientWs == nil || info.TargetWs == nil {
return service.OpenAIErrorWrapper(fmt.Errorf("invalid websocket connection"), "invalid_connection", http.StatusBadRequest), nil
return types.NewError(fmt.Errorf("invalid websocket connection"), types.ErrorCodeBadResponse), nil
}
info.IsStream = true
@@ -415,7 +367,7 @@ func OpenaiRealtimeHandler(c *gin.Context, info *relaycommon.RelayInfo) (*dto.Op
}
realtimeEvent := &dto.RealtimeEvent{}
err = json.Unmarshal(message, realtimeEvent)
err = common.Unmarshal(message, realtimeEvent)
if err != nil {
errChan <- fmt.Errorf("error unmarshalling message: %v", err)
return
@@ -475,7 +427,7 @@ func OpenaiRealtimeHandler(c *gin.Context, info *relaycommon.RelayInfo) (*dto.Op
}
info.SetFirstResponseTime()
realtimeEvent := &dto.RealtimeEvent{}
err = json.Unmarshal(message, realtimeEvent)
err = common.Unmarshal(message, realtimeEvent)
if err != nil {
errChan <- fmt.Errorf("error unmarshalling message: %v", err)
return
@@ -522,9 +474,9 @@ func OpenaiRealtimeHandler(c *gin.Context, info *relaycommon.RelayInfo) (*dto.Op
localUsage = &dto.RealtimeUsage{}
// print now usage
}
//common.LogInfo(c, fmt.Sprintf("realtime streaming sumUsage: %v", sumUsage))
//common.LogInfo(c, fmt.Sprintf("realtime streaming localUsage: %v", localUsage))
//common.LogInfo(c, fmt.Sprintf("realtime streaming localUsage: %v", localUsage))
common.LogInfo(c, fmt.Sprintf("realtime streaming sumUsage: %v", sumUsage))
common.LogInfo(c, fmt.Sprintf("realtime streaming localUsage: %v", localUsage))
common.LogInfo(c, fmt.Sprintf("realtime streaming localUsage: %v", localUsage))
} else if realtimeEvent.Type == dto.RealtimeEventTypeSessionUpdated || realtimeEvent.Type == dto.RealtimeEventTypeSessionCreated {
realtimeSession := realtimeEvent.Session
@@ -600,41 +552,26 @@ func preConsumeUsage(ctx *gin.Context, info *relaycommon.RelayInfo, usage *dto.R
return err
}
func OpenaiHandlerWithUsage(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (*dto.OpenAIErrorWithStatusCode, *dto.Usage) {
func OpenaiHandlerWithUsage(c *gin.Context, info *relaycommon.RelayInfo, resp *http.Response) (*dto.Usage, *types.NewAPIError) {
defer common.CloseResponseBodyGracefully(resp)
responseBody, err := io.ReadAll(resp.Body)
if err != nil {
return service.OpenAIErrorWrapper(err, "read_response_body_failed", http.StatusInternalServerError), nil
}
err = resp.Body.Close()
if err != nil {
return service.OpenAIErrorWrapper(err, "close_response_body_failed", http.StatusInternalServerError), nil
}
// Reset response body
resp.Body = io.NopCloser(bytes.NewBuffer(responseBody))
// We shouldn't set the header before we parse the response body, because the parse part may fail.
// And then we will have to send an error response, but in this case, the header has already been set.
// So the httpClient will be confused by the response.
// For example, Postman will report error, and we cannot check the response at all.
for k, v := range resp.Header {
c.Writer.Header().Set(k, v[0])
}
// reset content length
c.Writer.Header().Set("Content-Length", fmt.Sprintf("%d", len(responseBody)))
c.Writer.WriteHeader(resp.StatusCode)
_, err = io.Copy(c.Writer, resp.Body)
if err != nil {
return service.OpenAIErrorWrapper(err, "copy_response_body_failed", http.StatusInternalServerError), nil
}
err = resp.Body.Close()
if err != nil {
return service.OpenAIErrorWrapper(err, "close_response_body_failed", http.StatusInternalServerError), nil
return nil, types.NewError(err, types.ErrorCodeReadResponseBodyFailed)
}
var usageResp dto.SimpleResponse
err = json.Unmarshal(responseBody, &usageResp)
err = common.Unmarshal(responseBody, &usageResp)
if err != nil {
return service.OpenAIErrorWrapper(err, "parse_response_body_failed", http.StatusInternalServerError), nil
return nil, types.NewError(err, types.ErrorCodeBadResponseBody)
}
// write the new response body
common.IOCopyBytesGracefully(c, resp, responseBody)
// Once we've written to the client, we should not return errors anymore
// because the upstream has already consumed resources and returned content
// We should still perform billing even if parsing fails
// format
if usageResp.InputTokens > 0 {
usageResp.PromptTokens += usageResp.InputTokens
@@ -646,5 +583,5 @@ func OpenaiHandlerWithUsage(c *gin.Context, resp *http.Response, info *relaycomm
usageResp.PromptTokensDetails.ImageTokens += usageResp.InputTokensDetails.ImageTokens
usageResp.PromptTokensDetails.TextTokens += usageResp.InputTokensDetails.TextTokens
}
return nil, &usageResp.Usage
return &usageResp.Usage, nil
}
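Editor's note: throughout these hunks `common.DecodeJson`, `common.DecodeJsonStr`, and direct `json.Marshal` calls become `common.Unmarshal`, `common.UnmarshalJsonStr`, and `common.Marshal`, named after their encoding/json counterparts. If they are thin wrappers (an assumption — the wrapper bodies are not in the shown files), they would look roughly like:

```go
package common

import "encoding/json"

// Wrappers named after the encoding/json functions they delegate to; keeping
// call sites behind one package would let a faster codec be swapped in later.
func Marshal(v any) ([]byte, error)          { return json.Marshal(v) }
func Unmarshal(data []byte, v any) error     { return json.Unmarshal(data, v) }
func UnmarshalJsonStr(s string, v any) error { return json.Unmarshal([]byte(s), v) }
```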

View File

@@ -1,7 +1,6 @@
package openai
import (
"bytes"
"fmt"
"io"
"net/http"
@@ -10,53 +9,32 @@ import (
relaycommon "one-api/relay/common"
"one-api/relay/helper"
"one-api/service"
"one-api/types"
"strings"
"github.com/gin-gonic/gin"
)
func OaiResponsesHandler(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (*dto.OpenAIErrorWithStatusCode, *dto.Usage) {
func OaiResponsesHandler(c *gin.Context, info *relaycommon.RelayInfo, resp *http.Response) (*dto.Usage, *types.NewAPIError) {
defer common.CloseResponseBodyGracefully(resp)
// read response body
var responsesResponse dto.OpenAIResponsesResponse
responseBody, err := io.ReadAll(resp.Body)
if err != nil {
return service.OpenAIErrorWrapper(err, "read_response_body_failed", http.StatusInternalServerError), nil
return nil, types.NewError(err, types.ErrorCodeReadResponseBodyFailed)
}
err = resp.Body.Close()
err = common.Unmarshal(responseBody, &responsesResponse)
if err != nil {
return service.OpenAIErrorWrapper(err, "close_response_body_failed", http.StatusInternalServerError), nil
}
err = common.DecodeJson(responseBody, &responsesResponse)
if err != nil {
return service.OpenAIErrorWrapper(err, "unmarshal_response_body_failed", http.StatusInternalServerError), nil
return nil, types.NewError(err, types.ErrorCodeBadResponseBody)
}
if responsesResponse.Error != nil {
return &dto.OpenAIErrorWithStatusCode{
Error: dto.OpenAIError{
Message: responsesResponse.Error.Message,
Type: "openai_error",
Code: responsesResponse.Error.Code,
},
StatusCode: resp.StatusCode,
}, nil
return nil, types.WithOpenAIError(*responsesResponse.Error, resp.StatusCode)
}
// reset response body
resp.Body = io.NopCloser(bytes.NewBuffer(responseBody))
// We shouldn't set the header before we parse the response body, because the parse part may fail.
// And then we will have to send an error response, but in this case, the header has already been set.
// So the httpClient will be confused by the response.
// For example, Postman will report error, and we cannot check the response at all.
for k, v := range resp.Header {
c.Writer.Header().Set(k, v[0])
}
c.Writer.WriteHeader(resp.StatusCode)
// copy response body
_, err = io.Copy(c.Writer, resp.Body)
if err != nil {
common.SysError("error copying response body: " + err.Error())
}
resp.Body.Close()
// write the new response body
common.IOCopyBytesGracefully(c, resp, responseBody)
// compute usage
usage := dto.Usage{}
usage.PromptTokens = responsesResponse.Usage.InputTokens
@@ -66,13 +44,13 @@ func OaiResponsesHandler(c *gin.Context, resp *http.Response, info *relaycommon.
for _, tool := range responsesResponse.Tools {
info.ResponsesUsageInfo.BuiltInTools[tool.Type].CallCount++
}
return nil, &usage
return &usage, nil
}
func OaiResponsesStreamHandler(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (*dto.OpenAIErrorWithStatusCode, *dto.Usage) {
func OaiResponsesStreamHandler(c *gin.Context, info *relaycommon.RelayInfo, resp *http.Response) (*dto.Usage, *types.NewAPIError) {
if resp == nil || resp.Body == nil {
common.LogError(c, "invalid response or response body")
return service.OpenAIErrorWrapper(fmt.Errorf("invalid response"), "invalid_response", http.StatusInternalServerError), nil
return nil, types.NewError(fmt.Errorf("invalid response"), types.ErrorCodeBadResponse)
}
var usage = &dto.Usage{}
@@ -82,7 +60,7 @@ func OaiResponsesStreamHandler(c *gin.Context, resp *http.Response, info *relayc
// 检查当前数据是否包含 completed 状态和 usage 信息
var streamResponse dto.ResponsesStreamResponse
if err := common.DecodeJsonStr(data, &streamResponse); err == nil {
if err := common.UnmarshalJsonStr(data, &streamResponse); err == nil {
sendResponsesStreamData(c, streamResponse, data)
switch streamResponse.Type {
case "response.completed":
@@ -115,5 +93,5 @@ func OaiResponsesStreamHandler(c *gin.Context, resp *http.Response, info *relayc
}
}
return nil, usage
return usage, nil
}

View File

@@ -9,6 +9,7 @@ import (
"one-api/relay/channel"
relaycommon "one-api/relay/common"
"one-api/service"
"one-api/types"
"github.com/gin-gonic/gin"
)
@@ -70,13 +71,13 @@ func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, request
return channel.DoApiRequest(a, c, info, requestBody)
}
func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (usage any, err *dto.OpenAIErrorWithStatusCode) {
func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (usage any, err *types.NewAPIError) {
if info.IsStream {
var responseText string
err, responseText = palmStreamHandler(c, resp)
usage = service.ResponseText2Usage(responseText, info.UpstreamModelName, info.PromptTokens)
} else {
err, usage = palmHandler(c, resp, info.PromptTokens, info.UpstreamModelName)
usage, err = palmHandler(c, info, resp)
}
return
}

Some files were not shown because too many files have changed in this diff.