Some OpenAI-format providers (via pi-ai) pre-subtract cached_tokens from
prompt_tokens upstream. When cached_tokens exceeds prompt_tokens due to
provider inconsistencies, the subtraction produces a negative input value
that flows through to the TUI status bar and the /usage dashboard.
Clamp rawInput to 0 in normalizeUsage() so downstream consumers never
see nonsensical negative token counts.
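
A minimal sketch of the clamp, assuming a standard OpenAI-style usage
payload; the surrounding field names and return shape are illustrative,
not the actual normalizeUsage() signature:

    // Illustrative shape; the real types live with normalizeUsage().
    interface OpenAIUsage {
      prompt_tokens: number;
      completion_tokens: number;
      prompt_tokens_details?: { cached_tokens?: number };
    }

    function normalizeUsage(usage: OpenAIUsage) {
      const cacheRead = usage.prompt_tokens_details?.cached_tokens ?? 0;
      // Under OpenAI semantics prompt_tokens includes cached tokens, so the
      // uncached input is prompt_tokens - cacheRead. Providers that already
      // pre-subtracted upstream make this go negative; clamp it to 0.
      const rawInput = Math.max(0, usage.prompt_tokens - cacheRead);
      return { input: rawInput, output: usage.completion_tokens, cacheRead };
    }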
Closes #30765
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Kimi K2 models use automatic prefix caching and return cache stats in
a nested field, usage.prompt_tokens_details.cached_tokens.
This fixes issue #7073, where cacheRead showed 0 for K2.5 users.
Also adds support for the top-level cached_tokens field used by the
moonshot-v1 explicit caching API.
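
A hedged sketch of the lookup order; the helper name extractCachedTokens
is hypothetical, but the two field paths are the ones named above:

    // Hypothetical helper; the real parsing sits wherever usage is normalized.
    function extractCachedTokens(usage: {
      cached_tokens?: number; // moonshot-v1 explicit caching (top-level)
      prompt_tokens_details?: { cached_tokens?: number }; // Kimi K2 prefix caching
    }): number {
      // Prefer the nested K2 field, then fall back to the top-level field.
      return usage.prompt_tokens_details?.cached_tokens ?? usage.cached_tokens ?? 0;
    }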
Closes #7073
Verified:
- CI checks for commit 86a7ecb45ebf0be61dce9261398000524fd9fab6
- Rebase conflicts resolved for compatibility with the latest main
Co-authored-by: vpesh <9496634+vpesh@users.noreply.github.com>
* initial commit
* feat: implement deriveSessionTotalTokens function and update usage tests
* Added deriveSessionTotalTokens function to calculate total tokens based on usage and context tokens.
* Updated usage tests to include cases for derived session total tokens.
* Refactored session usage calculations in multiple files to use the new function for improved accuracy (a sketch of the function follows this list).
* fix: restore overflow truncation fallback + changelog/test hardening (#11551) (thanks @tyler6204)
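
A minimal sketch of deriveSessionTotalTokens under assumed semantics
(prefer a reported context-token figure, otherwise sum the usage
counters); the field names and precedence are guesses, not the shipped
implementation:

    // Assumed usage shape; the real types live in the usage module.
    interface Usage {
      input: number;
      output: number;
      cacheRead: number;
      cacheWrite: number;
    }

    function deriveSessionTotalTokens(usage: Usage, contextTokens?: number): number {
      // Trust an explicit context-token count when the provider reports one;
      // otherwise derive the total from the individual counters.
      if (contextTokens && contextTokens > 0) return contextTokens;
      return usage.input + usage.output + usage.cacheRead + usage.cacheWrite;
    }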