[fix] Add cache token capture for Droid OpenAI endpoint

The _parseOpenAIUsageFromSSE method was not capturing cache-related
tokens (cache_read_input_tokens, cache_creation_input_tokens) from
OpenAI-format responses, while the Anthropic endpoint already
captured them correctly.

This fix adds extraction of:
- cached_tokens from input_tokens_details
- cache_creation_input_tokens from both input_tokens_details and
  top-level usage object

This ensures proper cache statistics tracking and cost calculation
for OpenAI models (like GPT-5/Codex) when using the Droid provider.
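
The fallback chain can be sketched in isolation. In the snippet below the sample `usage` object is hypothetical (its values are made up for illustration); only the field names and the extraction logic come from the diff below:

```javascript
// Hypothetical OpenAI-style usage payload; values are illustrative only.
const usage = {
  input_tokens: 1200,
  output_tokens: 80,
  input_tokens_details: { cached_tokens: 1000 }
}

// Mirrors the extraction in the diff: read cached_tokens from the nested
// details object, then fall back through both possible locations for
// cache_creation_input_tokens, defaulting to 0 when neither is present.
const cacheRead = usage.input_tokens_details?.cached_tokens || 0
const cacheCreation =
  usage.input_tokens_details?.cache_creation_input_tokens ||
  usage.cache_creation_input_tokens ||
  0
```

With this sample payload, `cacheRead` is 1000 and `cacheCreation` falls through to 0, since neither location carries a cache-creation count.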

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
John Doe
2025-12-06 23:00:54 +03:00
parent ebecee4c6f
commit b3e27e9f15


@@ -737,6 +737,14 @@ class DroidRelayService {
           currentUsageData.output_tokens = 0
         }
+        // Capture cache tokens from OpenAI format
+        currentUsageData.cache_read_input_tokens =
+          data.usage.input_tokens_details?.cached_tokens || 0
+        currentUsageData.cache_creation_input_tokens =
+          data.usage.input_tokens_details?.cache_creation_input_tokens ||
+          data.usage.cache_creation_input_tokens ||
+          0
         logger.debug('📊 Droid OpenAI usage:', currentUsageData)
       }
@@ -758,6 +766,14 @@ class DroidRelayService {
           currentUsageData.output_tokens = 0
         }
+        // Capture cache tokens from OpenAI Response API format
+        currentUsageData.cache_read_input_tokens =
+          usage.input_tokens_details?.cached_tokens || 0
+        currentUsageData.cache_creation_input_tokens =
+          usage.input_tokens_details?.cache_creation_input_tokens ||
+          usage.cache_creation_input_tokens ||
+          0
         logger.debug('📊 Droid OpenAI response usage:', currentUsageData)
       }
     } catch (parseError) {
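
Why the cache split matters for cost: cached input tokens are typically billed at a steep discount relative to fresh input, so lumping them together overstates cost. A minimal sketch with a hypothetical `computeCost` helper and made-up per-token rates (not real GPT-5/Codex pricing):

```javascript
// Hypothetical cost calculation showing why cache_read / cache_creation
// must be tracked separately. Rates are placeholder per-token values.
function computeCost(usage, rates) {
  // Cached reads are a subset of input_tokens, so bill them at the
  // discounted rate and only charge the full input rate for the rest.
  const uncachedInput = usage.input_tokens - usage.cache_read_input_tokens
  return (
    uncachedInput * rates.input +
    usage.cache_read_input_tokens * rates.cacheRead +
    usage.cache_creation_input_tokens * rates.cacheWrite +
    usage.output_tokens * rates.output
  )
}
```

Without the fix, `cache_read_input_tokens` stayed 0 for the Droid OpenAI endpoint, so every input token was billed at the full `rates.input` price.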