fix(macos): ensure exec approval prompt displays the command (#5042)

* fix(config): migrate audio.transcription with any CLI command

Two bugs fixed:
1. Removed the CLI allowlist from mapLegacyAudioTranscription: the modern
   config format has no such restriction, so the allowlist only blocked
   legacy migration of valid configs such as whisperx-transcribe.sh.
2. Moved the audio.transcription migration into its own migration entry:
   it was previously nested inside routing.config-v2, which early-exited
   when no routing section existed, so the audio migration never ran.

Closes #5017
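The ordering bug in point 2 can be sketched in isolation. The entry ids below match the real migrations, but the bodies are illustrative stand-ins, not the shipped code:

```typescript
// A migration entry mutates the raw config and records human-readable changes.
type Migration = {
  id: string;
  apply: (raw: Record<string, unknown>, changes: string[]) => void;
};

// Before: audio.transcription handling lived inside routing.config-v2,
// whose apply() returned early when the config had no routing section.
const nested: Migration = {
  id: "routing.config-v2",
  apply: (raw, changes) => {
    if (typeof raw.routing !== "object" || raw.routing === null) {
      return; // early exit: any audio handling below this line never ran
    }
    void changes; // ...routing (and, formerly, audio) migration work...
  },
};

// After: audio.transcription gets its own entry, so it runs regardless
// of whether a routing section exists.
const separate: Migration = {
  id: "audio.transcription-v2",
  apply: (raw, changes) => {
    const audio = raw.audio as Record<string, unknown> | undefined;
    if (audio?.transcription === undefined) {
      return;
    }
    changes.push("Moved audio.transcription → tools.media.audio.models.");
  },
};
```

Running both against a config that has `audio.transcription` but no `routing` section shows the difference: the nested version records nothing, the separate entry performs the move.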

* fix(macos): ensure exec approval prompt displays the command

The NSStackView and NSScrollView for the command text lacked proper
width constraints, causing the accessory view to collapse to zero
width in some cases. This fix:

1. Adds a minimum width constraint (380px) to the root stack view
2. Adds a minimum width constraint to the command scroll view
3. Enables vertical resizing and scrolling for long commands
4. Adds a maximum height constraint to prevent excessively tall prompts

Closes #5038

* fix: validate legacy audio transcription migration input (openclaw#5042) thanks @shayan919293

* docs: add changelog note for legacy audio migration guard (openclaw#5042) thanks @shayan919293

* fix: satisfy lint on audio transcription migration braces (openclaw#5042) thanks @shayan919293

---------

Co-authored-by: Peter Steinberger <steipete@gmail.com>
Author: shayan919293
Date: 2026-02-13 06:49:06 -08:00
Committed by: GitHub
Parent: a7d6e44719
Commit: ab4adf7170
5 changed files with 97 additions and 41 deletions


@@ -369,46 +369,53 @@ export const LEGACY_CONFIG_MIGRATIONS_PART_2: LegacyConfigMigration[] = [
             changes.push("Removed routing.transcribeAudio (tools.media.audio.models already set).");
           }
         } else {
-          changes.push("Removed routing.transcribeAudio (unsupported transcription CLI).");
+          changes.push("Removed routing.transcribeAudio (invalid or empty command).");
         }
         delete routing.transcribeAudio;
       }
-      const audio = getRecord(raw.audio);
-      if (audio?.transcription !== undefined) {
-        const mapped = mapLegacyAudioTranscription(audio.transcription);
-        if (mapped) {
-          const tools = ensureRecord(raw, "tools");
-          const media = ensureRecord(tools, "media");
-          const mediaAudio = ensureRecord(media, "audio");
-          const models = Array.isArray(mediaAudio.models) ? (mediaAudio.models as unknown[]) : [];
-          if (models.length === 0) {
-            mediaAudio.enabled = true;
-            mediaAudio.models = [mapped];
-            changes.push("Moved audio.transcription → tools.media.audio.models.");
-          } else {
-            changes.push("Removed audio.transcription (tools.media.audio.models already set).");
-          }
-          delete audio.transcription;
-          if (Object.keys(audio).length === 0) {
-            delete raw.audio;
-          } else {
-            raw.audio = audio;
-          }
-        } else {
-          delete audio.transcription;
-          changes.push("Removed audio.transcription (unsupported transcription CLI).");
-          if (Object.keys(audio).length === 0) {
-            delete raw.audio;
-          } else {
-            raw.audio = audio;
-          }
-        }
-      }
       if (Object.keys(routing).length === 0) {
         delete raw.routing;
       }
     },
   },
+  {
+    id: "audio.transcription-v2",
+    describe: "Move audio.transcription to tools.media.audio.models",
+    apply: (raw, changes) => {
+      const audio = getRecord(raw.audio);
+      if (audio?.transcription === undefined) {
+        return;
+      }
+      const mapped = mapLegacyAudioTranscription(audio.transcription);
+      if (mapped) {
+        const tools = ensureRecord(raw, "tools");
+        const media = ensureRecord(tools, "media");
+        const mediaAudio = ensureRecord(media, "audio");
+        const models = Array.isArray(mediaAudio.models) ? (mediaAudio.models as unknown[]) : [];
+        if (models.length === 0) {
+          mediaAudio.enabled = true;
+          mediaAudio.models = [mapped];
+          changes.push("Moved audio.transcription → tools.media.audio.models.");
+        } else {
+          changes.push("Removed audio.transcription (tools.media.audio.models already set).");
+        }
+        delete audio.transcription;
+        if (Object.keys(audio).length === 0) {
+          delete raw.audio;
+        } else {
+          raw.audio = audio;
+        }
+      } else {
+        delete audio.transcription;
+        changes.push("Removed audio.transcription (invalid or empty command).");
+        if (Object.keys(audio).length === 0) {
+          delete raw.audio;
+        } else {
+          raw.audio = audio;
+        }
+      }
+    },
+  },
 ];
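The new audio.transcription-v2 entry can be exercised in isolation. Below is a minimal sketch of its apply() body: getRecord, ensureRecord, and mapLegacyAudioTranscription are stand-ins that assume the real helpers' behavior (they are not the shipped implementations), and the mapped/unmapped branches from the diff above are condensed into shared cleanup:

```typescript
type Rec = Record<string, unknown>;

// Assumed behavior: return the value if it is a plain object, else undefined.
const getRecord = (v: unknown): Rec | undefined =>
  typeof v === "object" && v !== null && !Array.isArray(v) ? (v as Rec) : undefined;

// Assumed behavior: get or create a plain-object child under `key`.
const ensureRecord = (parent: Rec, key: string): Rec => {
  const existing = getRecord(parent[key]);
  if (existing) return existing;
  const fresh: Rec = {};
  parent[key] = fresh;
  return fresh;
};

// Stub: accept any non-empty string command; the real mapper builds a
// full model entry and may accept richer legacy shapes.
const mapLegacyAudioTranscription = (v: unknown): Rec | undefined =>
  typeof v === "string" && v.trim().length > 0 ? { command: v } : undefined;

// Sketch of the audio.transcription-v2 apply() body.
const applyAudioMigration = (raw: Rec, changes: string[]): void => {
  const audio = getRecord(raw.audio);
  if (audio?.transcription === undefined) return;
  const mapped = mapLegacyAudioTranscription(audio.transcription);
  if (mapped) {
    const mediaAudio = ensureRecord(ensureRecord(ensureRecord(raw, "tools"), "media"), "audio");
    const models = Array.isArray(mediaAudio.models) ? (mediaAudio.models as unknown[]) : [];
    if (models.length === 0) {
      mediaAudio.enabled = true;
      mediaAudio.models = [mapped];
      changes.push("Moved audio.transcription → tools.media.audio.models.");
    } else {
      changes.push("Removed audio.transcription (tools.media.audio.models already set).");
    }
  } else {
    changes.push("Removed audio.transcription (invalid or empty command).");
  }
  // Shared cleanup: drop the legacy key, and drop `audio` entirely if empty.
  delete audio.transcription;
  if (Object.keys(audio).length === 0) delete raw.audio;
};
```

A valid legacy command is moved under tools.media.audio.models and the empty audio section is removed; a blank command is dropped with an "invalid or empty command" note, matching the message added in this commit.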