When the backoff saturates at 60 min and retries fire every 30 min
(e.g. from cron jobs), each failed request reset cooldownUntil to
now+60m, unconditionally overwriting the existing deadline. The window
therefore kept sliding forward and the profile never recovered without
manually clearing usageStats in auth-profiles.json.
Fix: only write a new cooldownUntil (or disabledUntil, for billing
errors) when the new deadline is strictly later than the existing one.
This lets the original window expire naturally while still allowing a
genuine backoff extension when error counts climb further.
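A minimal sketch of the guard described above (the helper name and shape are assumptions for illustration, not the project's actual code):

```typescript
// Hypothetical helper illustrating the "extend, never renew" rule:
// keep the existing deadline unless the proposed one is strictly later.
function extendDeadline(
  existing: number | undefined,
  proposed: number,
): number {
  if (existing !== undefined && proposed <= existing) {
    return existing; // original window is left to expire naturally
  }
  return proposed; // genuine backoff extension
}
```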
Fixes #23516
[AI-assisted]
The test verifies that cooldownUntil IS cleared when it equals exactly
`now` (>= comparison), but the test name said "does not clear". Fixed
the name to match the actual assertion behavior.
When an auth profile hits a rate limit, `errorCount` is incremented and
`cooldownUntil` is set with exponential backoff. After the cooldown
expires, the time-based check correctly returns false — but `errorCount`
persists. The next transient failure immediately escalates to a much
longer cooldown because the backoff formula uses the stale count:
60s × 5^(errorCount-1), max 1h
This creates a positive feedback loop where profiles appear permanently
stuck after rate limits, requiring manual JSON editing to recover.
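For reference, the backoff formula above works out like this (a sketch; the real computation lives in the auth-profile store):

```typescript
// 60s × 5^(errorCount - 1), capped at 1 hour.
const backoffMs = (errorCount: number): number =>
  Math.min(60_000 * 5 ** (errorCount - 1), 3_600_000);

// errorCount 1 → 60s, 2 → 5min, 3 → 25min, 4+ → capped at 1h.
// A stale errorCount of 3 means the very next transient failure
// waits 25 minutes instead of 60 seconds.
```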
Add `clearExpiredCooldowns()` which sweeps all profiles on every call to
`resolveAuthProfileOrder()` and clears expired `cooldownUntil` /
`disabledUntil` values along with resetting `errorCount` and
`failureCounts` — giving the profile a fair retry window (circuit-breaker
half-open → closed transition).
Key design decisions:
- `cooldownUntil` and `disabledUntil` handled independently (a profile
can have both; only the expired one is cleared)
- `errorCount` reset only when ALL unusable windows have expired
- `lastFailureAt` preserved for the existing failureWindowMs decay logic
- In-memory mutation; disk persistence happens lazily on the next store
write, matching the existing save pattern
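A sketch of the sweep under those decisions (field names follow the description above; the exact shape of the store is an assumption):

```typescript
interface UsageStats {
  cooldownUntil?: number;
  disabledUntil?: number;
  errorCount: number;
  failureCounts: Record<string, number>;
  lastFailureAt?: number; // preserved for failureWindowMs decay
}

// Hypothetical sketch of clearExpiredCooldowns(); mutates in memory only.
function clearExpiredCooldowns(
  profiles: Record<string, UsageStats>,
  now = Date.now(),
): void {
  for (const stats of Object.values(profiles)) {
    // Windows handled independently; <= clears a deadline equal to now.
    if (stats.cooldownUntil !== undefined && stats.cooldownUntil <= now) {
      delete stats.cooldownUntil;
    }
    if (stats.disabledUntil !== undefined && stats.disabledUntil <= now) {
      delete stats.disabledUntil;
    }
    // Reset counters only once no unusable window remains.
    if (stats.cooldownUntil === undefined && stats.disabledUntil === undefined) {
      stats.errorCount = 0;
      stats.failureCounts = {};
    }
  }
}
```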
Fixes #3604
Related: #13623, #15851, #11972, #8434
* Fix subagent announce race and timeout handling
Bug 1: Subagent announce fires before model failover retries finish
- Problem: CLI provider emitted lifecycle error on each attempt, causing
subagent registry to prematurely call beginSubagentCleanup() and announce
with incorrect status before failover retries completed
- Fix: Removed lifecycle error emission from CLI provider's attempt-level
.catch() in agent-runner-execution.ts. Errors still propagate to
runWithModelFallback for retry, but no intermediate lifecycle events
are emitted. Only the final outcome (after all retries) emits lifecycle
events.
Bug 2: Hard 600s per-prompt timeout ignores runTimeoutSeconds=0
- Problem: When runTimeoutSeconds=0 (meaning 'no timeout'), the code
returned the default 600s timeout instead of respecting the 0 setting
- Fix: Modified resolveAgentTimeoutMs() to treat 0 as 'no timeout' and
return a very large timeout value (30 days) instead of the default.
This avoids setTimeout issues with Infinity while effectively providing
unlimited time for long-running tasks.
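Sketched below with assumed constant names (the 600s default and 30-day ceiling come from the description above; the real resolveAgentTimeoutMs may differ):

```typescript
const DEFAULT_TIMEOUT_MS = 600_000; // 600s default per-prompt timeout
const UNLIMITED_MS = 30 * 24 * 60 * 60 * 1000; // ~30 days; avoids Infinity,
// which setTimeout does not handle (delays are clamped to 32-bit range)

// Hypothetical sketch of resolveAgentTimeoutMs() after the fix.
function resolveAgentTimeoutMs(runTimeoutSeconds?: number): number {
  if (runTimeoutSeconds === 0) return UNLIMITED_MS; // 0 means "no timeout"
  if (runTimeoutSeconds !== undefined && runTimeoutSeconds > 0) {
    return runTimeoutSeconds * 1000;
  }
  return DEFAULT_TIMEOUT_MS;
}
```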
* fix: emit lifecycle:error for CLI failures (#6621) (thanks @tyler6204)
* chore: satisfy format/lint gates (#6621) (thanks @tyler6204)
* fix: restore build after upstream type changes (#6621) (thanks @tyler6204)
* test: fix createSystemPromptOverride tests to match new return type (#6621) (thanks @tyler6204)