Implement comprehensive topup billing system with user history viewing and admin management capabilities.
## Features Added
### Frontend
- Add topup history modal with paginated billing records
- Display order details: trade number, payment method, amount, money, status, and creation time
- Implement empty state with proper illustrations
- Add payment method column with localized display (Stripe, Alipay, WeChat)
- Add admin manual completion feature for pending orders
- Add Coins icon for recharge amount display
- Integrate "Bills" button in RechargeCard header
- Optimize code quality by using shared utility functions (isAdmin)
- Extract constants for status and payment method mappings
- Use React.useMemo for performance optimization
### Backend
- Create GET `/api/user/topup/self` endpoint for user topup history with pagination
- Create POST `/api/user/topup/complete` endpoint for admin manual order completion
- Add `payment_method` field to TopUp model for tracking payment types
- Implement `GetUserTopUps` method with proper pagination and ordering
- Implement `ManualCompleteTopUp` with transaction safety and row-level locking
- Add application-level mutex locks to prevent concurrent order processing
- Record payment method in Epay and Stripe payment flows
- Ensure idempotency and data consistency with proper error handling
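The pagination normalization that `GetUserTopUps` performs can be sketched as follows. This is an illustrative sketch, not the project's actual code: the function name, default, and cap are assumptions.

```go
package main

import "fmt"

// Assumed bounds; the real endpoint's limits live in the controller code.
const defaultPageSize = 10
const maxPageSize = 100

// normalizePage clamps user-supplied paging parameters and returns the
// OFFSET/LIMIT pair a query like
//   SELECT ... ORDER BY id DESC LIMIT ? OFFSET ?
// would use.
func normalizePage(page, size int) (offset, limit int) {
	if page < 1 {
		page = 1 // treat page 0 or negatives as the first page
	}
	if size < 1 {
		size = defaultPageSize
	}
	if size > maxPageSize {
		size = maxPageSize // never let a client request an unbounded page
	}
	return (page - 1) * size, size
}

func main() {
	off, lim := normalizePage(3, 20)
	fmt.Println(off, lim) // 40 20
}
```

Clamping both ends keeps a hostile `?page=-1&size=100000` request from turning into an expensive full-table scan.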
### Internationalization
- Add i18n keys for Chinese (zh), English (en), and French (fr)
- Support for billing-related UI text and status messages
## Technical Improvements
- Use database transactions with FOR UPDATE row-level locking
- Implement sync.Map-based mutex for order-level concurrency control
- Proper error handling and user-friendly toast notifications
- Follow existing codebase patterns for empty states and modals
- Maintain code quality with extracted render functions and constants
## Files Changed
- Backend: controller/topup.go, controller/topup_stripe.go, model/topup.go, router/api-router.go
- Frontend: web/src/components/topup/modals/TopupHistoryModal.jsx (new), web/src/components/topup/RechargeCard.jsx, web/src/components/topup/index.jsx
- i18n: web/src/i18n/locales/{zh,en,fr}.json
> [!NOTE]
> This document is machine translated. For the most accurate information, please refer to the Chinese version.
## 📝 Project Description
> [!NOTE]
> This is an open-source project developed based on One API.

> [!IMPORTANT]
> - This project is for personal learning purposes only, with no guarantee of stability or technical support.
> - Users must comply with OpenAI's Terms of Use and applicable laws and regulations, and must not use it for illegal purposes.
> - In accordance with the Interim Measures for the Management of Generative Artificial Intelligence Services, do not provide unregistered generative AI services to the public in China.
## 🤝 Trusted Partners

Listed in no particular order.
## 📚 Documentation

For detailed documentation, please visit our official Wiki: https://docs.newapi.pro/

You can also browse the AI-generated DeepWiki:
## ✨ Key Features
New API offers a wide range of features, please refer to Features Introduction for details:
- 🎨 Brand new UI interface
- 🌍 Multi-language support
- 💰 Online recharge functionality, currently supports EPay and Stripe
- 🔍 Support for querying usage quotas with keys (works with neko-api-key-tool)
- 🔄 Compatible with the original One API database
- 💵 Support for pay-per-use model pricing
- ⚖️ Support for weighted random channel selection
- 📈 Data dashboard (console)
- 🔒 Token grouping and model restrictions
- 🤖 Support for more authorization login methods (LinuxDO, Telegram, OIDC)
- 🔄 Support for Rerank models (Cohere and Jina), API Documentation
- ⚡ Support for OpenAI Realtime API (including Azure channels), API Documentation
- ⚡ Support for OpenAI Responses format, API Documentation
- ⚡ Support for Claude Messages format, API Documentation
- ⚡ Support for Google Gemini format, API Documentation
- 🧠 Support for setting reasoning effort through model name suffixes:
  - OpenAI o-series models
    - Add the `-high` suffix for high reasoning effort (e.g. `o3-mini-high`)
    - Add the `-medium` suffix for medium reasoning effort (e.g. `o3-mini-medium`)
    - Add the `-low` suffix for low reasoning effort (e.g. `o3-mini-low`)
  - Claude thinking models
    - Add the `-thinking` suffix to enable thinking mode (e.g. `claude-3-7-sonnet-20250219-thinking`)
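The suffix convention above can be sketched as a small parser. This is an illustrative sketch, not the project's actual function; the name and return shape are assumptions.

```go
package main

import (
	"fmt"
	"strings"
)

// reasoningEffortFromModel strips a trailing -high / -medium / -low marker
// from a model name, returning the base model and the effort to send upstream.
func reasoningEffortFromModel(model string) (base, effort string) {
	for _, e := range []string{"high", "medium", "low"} {
		if strings.HasSuffix(model, "-"+e) {
			return strings.TrimSuffix(model, "-"+e), e
		}
	}
	return model, "" // no suffix: fall back to the upstream default
}

func main() {
	base, effort := reasoningEffortFromModel("o3-mini-high")
	fmt.Println(base, effort) // o3-mini high
}
```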
- 🔄 Thinking-to-content functionality
- 🔄 Model rate limiting for users
- 🔄 Request format conversion functionality, supporting the following three format conversions:
- OpenAI Chat Completions => Claude Messages
- Claude Messages => OpenAI Chat Completions (can be used for Claude Code to call third-party models)
- OpenAI Chat Completions => Gemini Chat
- 💰 Cache billing support, which allows billing at a set ratio when the cache is hit:
  - Set the `Prompt Cache Ratio` option in `System Settings` -> `Operation Settings`
  - Set `Prompt Cache Ratio` in the channel, range 0-1; e.g., setting it to 0.5 means billing at 50% when the cache is hit
  - Supported channels:
    - OpenAI
    - Azure
    - DeepSeek
    - Claude
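The cache-ratio arithmetic works out as in this minimal sketch; the function name and the unit per-token price are illustrative assumptions, not the project's billing code.

```go
package main

import "fmt"

// promptCost bills cached prompt tokens at cacheRatio times the normal
// per-token price and uncached tokens at full price.
func promptCost(promptTokens, cachedTokens int, pricePerToken, cacheRatio float64) float64 {
	uncached := promptTokens - cachedTokens
	return float64(uncached)*pricePerToken + float64(cachedTokens)*pricePerToken*cacheRatio
}

func main() {
	// 1000 prompt tokens, 600 served from cache, ratio 0.5 at unit price:
	// 400*1.0 + 600*0.5 = 700
	fmt.Println(promptCost(1000, 600, 1.0, 0.5)) // 700
}
```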
## Model Support
This version supports multiple models, please refer to API Documentation-Relay Interface for details:
- Third-party models gpts (gpt-4-gizmo-*)
- Third-party channel Midjourney-Proxy(Plus) interface, API Documentation
- Third-party channel Suno API interface, API Documentation
- Custom channels, supporting full call address input
- Rerank models (Cohere and Jina), API Documentation
- Claude Messages format, API Documentation
- Google Gemini format, API Documentation
- Dify, currently only supports chatflow
- For more interfaces, please refer to API Documentation
## Environment Variable Configuration
For detailed configuration instructions, please refer to Installation Guide-Environment Variables Configuration:
- `GENERATE_DEFAULT_TOKEN`: Whether to generate an initial token for newly registered users, default is `false`
- `STREAMING_TIMEOUT`: Streaming response timeout, default is 300 seconds
- `DIFY_DEBUG`: Whether to output workflow and node information for Dify channels, default is `true`
- `GET_MEDIA_TOKEN`: Whether to count image tokens, default is `true`
- `GET_MEDIA_TOKEN_NOT_STREAM`: Whether to count image tokens in non-streaming cases, default is `true`
- `UPDATE_TASK`: Whether to update asynchronous tasks (Midjourney, Suno), default is `true`
- `GEMINI_VISION_MAX_IMAGE_NUM`: Maximum number of images for Gemini models, default is `16`
- `MAX_FILE_DOWNLOAD_MB`: Maximum file download size in MB, default is `20`
- `CRYPTO_SECRET`: Encryption key used for encrypting Redis database content
- `AZURE_DEFAULT_API_VERSION`: Azure channel default API version, default is `2025-04-01-preview`
- `NOTIFICATION_LIMIT_DURATION_MINUTE`: Notification limit duration, default is `10` minutes
- `NOTIFY_LIMIT_COUNT`: Maximum number of user notifications within the specified duration, default is `2`
- `ERROR_LOG_ENABLED`: Whether to record and display error logs, default is `false`
## Deployment
For detailed deployment guides, please refer to Installation Guide-Deployment Methods:
> [!TIP]
> Latest Docker image: `calciumion/new-api:latest`
### Multi-machine Deployment Considerations

- The environment variable `SESSION_SECRET` must be set, otherwise login state will be inconsistent across machines
- If sharing Redis, `CRYPTO_SECRET` must be set, otherwise Redis content cannot be accessed across machines
### Deployment Requirements

- Local database (default): SQLite (Docker deployments must mount the `/data` directory)
- Remote database: MySQL version >= 5.7.8, PgSQL version >= 9.6
### Deployment Methods

#### Using the BaoTa Panel Docker Feature

Install BaoTa Panel (version 9.2.0 or above), find New-API in the application store, and install it. Tutorial with images
#### Using Docker Compose (Recommended)

```shell
# Download the project
git clone https://github.com/Calcium-Ion/new-api.git
cd new-api
# Edit docker-compose.yml as needed
# Start
docker-compose up -d
```
#### Using the Docker Image Directly

```shell
# Using SQLite
docker run --name new-api -d --restart always -p 3000:3000 -e TZ=Asia/Shanghai -v /home/ubuntu/data/new-api:/data calciumion/new-api:latest

# Using MySQL
docker run --name new-api -d --restart always -p 3000:3000 -e SQL_DSN="root:123456@tcp(localhost:3306)/oneapi" -e TZ=Asia/Shanghai -v /home/ubuntu/data/new-api:/data calciumion/new-api:latest
```
## Channel Retry and Cache

Channel retry is supported: set the retry count in `Settings` -> `Operation Settings` -> `General Settings` -> `Failure Retry Count`. Enabling caching alongside retry is recommended.

### Cache Configuration Method

- `REDIS_CONN_STRING`: Set Redis as the cache
- `MEMORY_CACHE_ENABLED`: Enable in-memory cache (no need to set manually if Redis is configured)
## API Documentation
For detailed API documentation, please refer to API Documentation:
- Chat API (Chat Completions)
- Response API (Responses)
- Image API (Image)
- Rerank API (Rerank)
- Realtime Chat API (Realtime)
- Claude Chat API
- Google Gemini Chat API
## Related Projects
- One API: Original project
- Midjourney-Proxy: Midjourney interface support
- neko-api-key-tool: Query usage quota with key
Other projects based on New API:
- new-api-horizon: High-performance optimized version of New API
## Help and Support
If you have any questions, please refer to Help and Support:





