* Agents: add subagent orchestration controls
* Agents: add subagent orchestration controls (WIP uncommitted changes)
* feat(subagents): add depth-based spawn gating for sub-sub-agents
* feat(subagents): tool policy, registry, and announce chain for nested agents
* feat(subagents): system prompt, docs, changelog for nested sub-agents
* fix(subagents): prevent model fallback override, show model during active runs, and block context overflow fallback
Bug 1: When a session has an explicit model override (e.g., gpt/openai-codex),
the fallback candidate logic in resolveFallbackCandidates silently appended the
global primary model (opus) as a backstop. On reinjection/steer with a transient
error, the session could fall back to opus, which has a smaller context window,
and crash. Fix: when storedModelOverride is set, pass fallbacksOverride ?? []
instead of undefined, preventing the implicit primary backstop.
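A minimal sketch of the guard described above; the argument shape and names are illustrative, not the actual resolveFallbackCandidates signature:

```typescript
// Hypothetical sketch: when a session pins a model, an absent fallback list
// must resolve to empty instead of inheriting the global primary backstop.
type FallbackArgs = {
  storedModelOverride?: string; // per-session pinned model, e.g. "openai-codex"
  fallbacksOverride?: string[]; // explicit per-session fallback list
  globalPrimary: string;        // e.g. "opus"
};

function resolveFallbackCandidates(args: FallbackArgs): string[] {
  // Before the fix, an undefined fallback list fell through to [globalPrimary].
  return args.storedModelOverride != null
    ? (args.fallbacksOverride ?? []) // pinned session: no implicit backstop
    : (args.fallbacksOverride ?? [args.globalPrimary]);
}
```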
Bug 2: Active subagents showed 'model n/a' in /subagents list because
resolveModelDisplay only read entry.model/modelProvider (populated after run
completes). Fix: fall back to modelOverride/providerOverride fields which are
populated at spawn time via sessions.patch.
Bug 3: Context overflow errors (prompt too long, context_length_exceeded) could
theoretically escape runEmbeddedPiAgent and be treated as failover candidates
in runWithModelFallback, causing a switch to a model with a smaller context
window. Fix: in runWithModelFallback, detect context overflow errors via
isLikelyContextOverflowError and rethrow them immediately instead of trying the
next model candidate.
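The Bug 3 behavior can be sketched as follows; the error-matching pattern and function signatures are assumptions for illustration:

```typescript
// Hypothetical sketch: context-overflow errors must not trigger model failover,
// since the next candidate may have a smaller context window.
function isLikelyContextOverflowError(err: unknown): boolean {
  const msg = err instanceof Error ? err.message : String(err);
  return /prompt too long|context_length_exceeded/i.test(msg); // pattern illustrative
}

async function runWithModelFallback<T>(
  candidates: string[],
  run: (model: string) => Promise<T>,
): Promise<T> {
  let lastErr: unknown;
  for (const model of candidates) {
    try {
      return await run(model);
    } catch (err) {
      // Rethrow overflow immediately: failover cannot fix a too-large prompt.
      if (isLikelyContextOverflowError(err)) throw err;
      lastErr = err; // transient error: try the next candidate
    }
  }
  throw lastErr;
}
```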
* fix(subagents): track spawn depth in session store and fix announce routing for nested agents
* Fix compaction status tracking and dedupe overflow compaction triggers
* fix(subagents): enforce depth block via session store and implement cascade kill
* fix: inject group chat context into system prompt
* fix(subagents): always write model to session store at spawn time
* Preserve spawnDepth when agent handler rewrites session entry
* fix(subagents): suppress announce on steer-restart
* fix(subagents): fallback spawned session model to runtime default
* fix(subagents): enforce spawn depth when caller key resolves by sessionId
* feat(subagents): implement active-first ordering for numeric targets and enhance task display
- Added a test to verify that subagents with numeric targets follow an active-first list ordering.
- Updated `resolveSubagentTarget` to sort subagent runs based on active status and recent activity.
- Enhanced task display in command responses to prevent truncation of long task descriptions.
- Introduced new utility functions for compacting task text and managing subagent run states.
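The active-first ordering can be sketched with a comparator like the one below; the run-record field names are illustrative, not the real SubagentRunRecord shape:

```typescript
// Hypothetical sketch: active runs sort first, then most recent activity.
type RunEntry = { active: boolean; updatedAt: number; label: string };

function sortActiveFirst(runs: RunEntry[]): RunEntry[] {
  return [...runs].sort((a, b) => {
    if (a.active !== b.active) return a.active ? -1 : 1; // active runs first
    return b.updatedAt - a.updatedAt; // then newest activity first
  });
}
```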
* fix(subagents): show model for active runs via run record fallback
When the spawned model matches the agent's default model, the session
store's override fields are intentionally cleared (isDefault: true).
The model/modelProvider fields are only populated after the run
completes. This left active subagents showing 'model n/a'.
Fix: store the resolved model on SubagentRunRecord at registration
time, and use it as a fallback in both display paths (subagents tool
and /subagents command) when the session store entry has no model info.
Changes:
- SubagentRunRecord: add optional model field
- registerSubagentRun: accept and persist model param
- sessions-spawn-tool: pass resolvedModel to registerSubagentRun
- subagents-tool: pass run record model as fallback to resolveModelDisplay
- commands-subagents: pass run record model as fallback to resolveModelDisplay
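The resulting fallback chain can be sketched like this; the types are simplified stand-ins for the real session entry and SubagentRunRecord:

```typescript
// Hypothetical sketch: session store fields first, then the run record's
// model captured at registration time, then the "model n/a" placeholder.
type SessionEntry = { model?: string; modelOverride?: string };
type RunRecord = { model?: string };

function resolveModelDisplay(entry?: SessionEntry, runRecord?: RunRecord): string {
  return entry?.model ?? entry?.modelOverride ?? runRecord?.model ?? "model n/a";
}
```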
* feat(chat): implement session key resolution and reset on sidebar navigation
- Added functions to resolve the main session key and reset chat state when switching sessions from the sidebar.
- Updated the `renderTab` function to handle session key changes when navigating to the chat tab.
- Introduced a test to verify that the session resets to "main" when opening chat from the sidebar navigation.
* fix: subagent timeout=0 passthrough and fallback prompt duplication
Bug 1: runTimeoutSeconds=0 now means 'no timeout' instead of applying 600s default
- sessions-spawn-tool: default to undefined (not 0) when neither timeout param
is provided; use != null check so explicit 0 passes through to gateway
- agent.ts: accept 0 as valid timeout (resolveAgentTimeoutMs already handles
0 → MAX_SAFE_TIMEOUT_MS)
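The `!= null` passthrough can be sketched as below; the params shape is illustrative, not the actual gateway request type:

```typescript
// Hypothetical sketch: build gateway params with a `!= null` check so an
// explicit 0 ("no timeout") survives, while an omitted timeout stays absent.
function buildSpawnParams(opts: { runTimeoutSeconds?: number }): Record<string, unknown> {
  const params: Record<string, unknown> = { method: "agent" };
  // A truthiness check (`if (opts.runTimeoutSeconds)`) would drop explicit 0.
  if (opts.runTimeoutSeconds != null) {
    params.timeout = opts.runTimeoutSeconds;
  }
  return params;
}
```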
Bug 2: model fallback no longer re-injects the original prompt as a duplicate
- agent.ts: track fallback attempt index; on retries use a short continuation
message instead of the full original prompt since the session file already
contains it from the first attempt
- Also skip re-sending images on fallback retries (already in session)
* feat(subagents): truncate long task descriptions in subagents command output
- Introduced a new utility function to format task previews, limiting their length to improve readability.
- Updated the command handler to use the new formatting function, ensuring task descriptions are truncated appropriately.
- Adjusted related tests to verify that long task descriptions are now truncated in the output.
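A sketch of such a preview helper, assuming an illustrative 80-character cap (the real limit is not stated in the log):

```typescript
// Hypothetical sketch: collapse whitespace, then cap length with an ellipsis.
function formatTaskPreview(task: string, maxChars = 80): string {
  const compact = task.replace(/\s+/g, " ").trim();
  return compact.length <= maxChars ? compact : `${compact.slice(0, maxChars - 1)}…`;
}
```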
* refactor(subagents): update subagent registry path resolution and improve command output formatting
- Replaced direct import of STATE_DIR with a utility function to resolve the state directory dynamically.
- Enhanced the formatting of command output for active and recent subagents, adding separators for better readability.
- Updated related tests to reflect changes in command output structure.
* fix(subagent): default sessions_spawn to no timeout when runTimeoutSeconds omitted
The previous fix (75a791106) correctly handled the case where
runTimeoutSeconds was explicitly set to 0 ("no timeout"). However,
when models omit the parameter entirely (which is common since the
schema marks it as optional), runTimeoutSeconds resolved to undefined.
undefined flowed through the chain as:
sessions_spawn → timeout: undefined (since undefined != null is false)
→ gateway agent handler → agentCommand opts.timeout: undefined
→ resolveAgentTimeoutMs({ overrideSeconds: undefined })
→ DEFAULT_AGENT_TIMEOUT_SECONDS (600s = 10 minutes)
This caused subagents to be killed at exactly 10 minutes even though
the user's intent (via TOOLS.md) was for subagents to run without a
timeout.
Fix: default runTimeoutSeconds to 0 (no timeout) when neither
runTimeoutSeconds nor timeoutSeconds is provided by the caller.
Subagent spawns are long-running by design and should not inherit the
600s agent-command default timeout.
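The default can be sketched as a single nullish-coalescing chain; the parameter names follow the log, the function itself is illustrative:

```typescript
// Hypothetical sketch: when the caller omits both timeout params, treat the
// spawn as "no timeout" (0) rather than letting undefined fall through to the
// 600s agent-command default further down the chain.
function resolveSpawnTimeoutSeconds(params: {
  runTimeoutSeconds?: number;
  timeoutSeconds?: number;
}): number {
  return params.runTimeoutSeconds ?? params.timeoutSeconds ?? 0; // 0 = no timeout
}
```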
* fix(subagent): accept timeout=0 in agent-via-gateway path (second 600s default)
* fix: thread timeout override through getReplyFromConfig dispatch path
getReplyFromConfig called resolveAgentTimeoutMs({ cfg }) with no override,
always falling back to the config default (600s). Add timeoutOverrideSeconds
to GetReplyOptions and pass it through as overrideSeconds so callers of the
dispatch chain can specify a custom timeout (0 = no timeout).
This complements the existing timeout threading in agentCommand and the
cron isolated-agent runner, which already pass overrideSeconds correctly.
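The threading can be sketched like this, assuming the 0 → MAX_SAFE_TIMEOUT_MS mapping mentioned earlier in the log; the types and default constant are illustrative:

```typescript
// Hypothetical sketch: thread an optional override through the dispatch path
// so the resolver can honor it (0 = no timeout) instead of always taking the
// 600s config default.
const MAX_SAFE_TIMEOUT_MS = 2 ** 31 - 1; // effectively "no timeout"

type GetReplyOptions = { timeoutOverrideSeconds?: number };

function resolveAgentTimeoutMs(args: { overrideSeconds?: number; defaultSeconds?: number }): number {
  const seconds = args.overrideSeconds ?? args.defaultSeconds ?? 600;
  return seconds === 0 ? MAX_SAFE_TIMEOUT_MS : seconds * 1000;
}

function getReplyTimeoutMs(opts: GetReplyOptions): number {
  // Before the fix this call passed no override and always got the default.
  return resolveAgentTimeoutMs({ overrideSeconds: opts.timeoutOverrideSeconds });
}
```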
* feat(model-fallback): normalize OpenAI Codex model references and enhance fallback handling
- Added normalization for OpenAI Codex model references, specifically converting "gpt-5.3-codex" to "openai-codex" before execution.
- Updated the `resolveFallbackCandidates` function to utilize the new normalization logic.
- Enhanced tests to verify the correct behavior of model normalization and fallback mechanisms.
- Introduced a new test case to ensure that the normalization process works as expected for various input formats.
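The normalization can be sketched as a small pre-pass; the matching pattern is an assumption covering the "gpt-5.3-codex" example from the log:

```typescript
// Hypothetical sketch: normalize Codex-style references (e.g. "gpt-5.3-codex")
// to the canonical "openai-codex" id before fallback resolution runs.
function normalizeCodexModelRef(model: string): string {
  return /^gpt-[\d.]+-codex$/i.test(model.trim()) ? "openai-codex" : model;
}
```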
* feat(tests): add unit tests for steer failure behavior in openclaw-tools
- Introduced a new test file to validate the behavior of subagents when steer replacement dispatch fails.
- Implemented tests to ensure that the announce behavior is restored correctly and that the suppression reason is cleared as expected.
- Enhanced the subagent registry with a new function to clear steer restart suppression.
- Updated related components to support the new test scenarios.
* fix(subagents): replace stop command with kill in slash commands and documentation
- Updated the `/subagents` command to replace `stop` with `kill` for consistency in controlling sub-agent runs.
- Modified related documentation to reflect the change in command usage.
- Removed legacy timeoutSeconds references from the sessions-spawn-tool schema and tests to streamline timeout handling.
- Enhanced tests to ensure correct behavior of the updated commands and their interactions.
* feat(tests): add unit tests for readLatestAssistantReply function
- Introduced a new test file for the `readLatestAssistantReply` function to validate its behavior with various message scenarios.
- Implemented tests to ensure the function correctly retrieves the latest assistant message and handles cases where the latest message has no text.
- Mocked the gateway call to simulate different message histories for comprehensive testing.
* feat(tests): enhance subagent kill-all cascade tests and announce formatting
- Added a new test to verify that the `kill-all` command cascades through ended parents to active descendants in subagents.
- Updated the subagent announce formatting tests to reflect changes in message structure, including the replacement of "Findings:" with "Result:" and the addition of new expectations for message content.
- Improved the handling of long findings and stats in the announce formatting logic to ensure concise output.
- Refactored related functions to enhance clarity and maintainability in the subagent registry and tools.
* refactor(subagent): update announce formatting and remove unused constants
- Modified the subagent announce formatting to replace "Findings:" with "Result:" and adjusted related expectations in tests.
- Removed constants for maximum announce findings characters and summary words, simplifying the announcement logic.
- Updated the handling of findings to retain full content instead of truncating, ensuring more informative outputs.
- Cleaned up unused imports in the commands-subagents file to enhance code clarity.
* feat(tests): enhance billing error handling in user-facing text
- Added tests to ensure that normal text mentioning billing plans is not rewritten, preserving user context.
- Updated the `isBillingErrorMessage` and `sanitizeUserFacingText` functions to improve handling of billing-related messages.
- Introduced new test cases for various scenarios involving billing messages to ensure accurate processing and output.
- Enhanced the subagent announce flow to correctly manage active descendant runs, preventing premature announcements.
* feat(subagent): enhance workflow guidance and auto-announcement clarity
- Added a new guideline in the subagent system prompt to emphasize trust in push-based completion, discouraging busy polling for status updates.
- Updated documentation to clarify that sub-agents will automatically announce their results, improving user understanding of the workflow.
- Enhanced tests to verify the new guidance on avoiding polling loops and to ensure the accuracy of the updated prompts.
* fix(cron): avoid announcing interim subagent spawn acks
* chore: clean post-rebase imports
* fix(cron): fall back to child replies when parent stays interim
* fix(subagents): make active-run guidance advisory
* fix(subagents): update announce flow to handle active descendants and enhance test coverage
- Modified the announce flow to defer announcements when active descendant runs are present, ensuring accurate status reporting.
- Updated tests to verify the new behavior, including scenarios where no fallback requester is available and ensuring proper handling of finished subagents.
- Enhanced the announce formatting to include an `expectFinal` flag for better clarity in the announcement process.
* fix(subagents): enhance announce flow and formatting for user updates
- Updated the announce flow to provide clearer instructions for user updates based on active subagent runs and requester context.
- Refactored the announcement logic to improve clarity and ensure internal context remains private.
- Enhanced tests to verify the new message expectations and formatting, including updated prompts for user-facing updates.
- Introduced a new function to build reply instructions based on session context, improving the overall announcement process.
* fix: resolve prep blockers and changelog placement (#14447) (thanks @tyler6204)
* fix: restore cron delivery-plan import after rebase (#14447) (thanks @tyler6204)
* fix: resolve test failures from rebase conflicts (#14447) (thanks @tyler6204)
* fix: apply formatting after rebase (#14447) (thanks @tyler6204)
761 lines
27 KiB
TypeScript
import { beforeEach, describe, expect, it, vi } from "vitest";

const agentSpy = vi.fn(async () => ({ runId: "run-main", status: "ok" }));
const sessionsDeleteSpy = vi.fn();
const readLatestAssistantReplyMock = vi.fn(async () => "raw subagent reply");
const embeddedRunMock = {
  isEmbeddedPiRunActive: vi.fn(() => false),
  isEmbeddedPiRunStreaming: vi.fn(() => false),
  queueEmbeddedPiMessage: vi.fn(() => false),
  waitForEmbeddedPiRunEnd: vi.fn(async () => true),
};
const subagentRegistryMock = {
  isSubagentSessionRunActive: vi.fn(() => true),
  countActiveDescendantRuns: vi.fn(() => 0),
  resolveRequesterForChildSession: vi.fn(() => null),
};
let sessionStore: Record<string, Record<string, unknown>> = {};
let configOverride: ReturnType<(typeof import("../config/config.js"))["loadConfig"]> = {
  session: {
    mainKey: "main",
    scope: "per-sender",
  },
};

vi.mock("../gateway/call.js", () => ({
  callGateway: vi.fn(async (req: unknown) => {
    const typed = req as { method?: string; params?: { message?: string; sessionKey?: string } };
    if (typed.method === "agent") {
      return await agentSpy(typed);
    }
    if (typed.method === "agent.wait") {
      return { status: "error", startedAt: 10, endedAt: 20, error: "boom" };
    }
    if (typed.method === "sessions.patch") {
      return {};
    }
    if (typed.method === "sessions.delete") {
      sessionsDeleteSpy(typed);
      return {};
    }
    return {};
  }),
}));

vi.mock("./tools/agent-step.js", () => ({
  readLatestAssistantReply: readLatestAssistantReplyMock,
}));

vi.mock("../config/sessions.js", () => ({
  loadSessionStore: vi.fn(() => sessionStore),
  resolveAgentIdFromSessionKey: () => "main",
  resolveStorePath: () => "/tmp/sessions.json",
  resolveMainSessionKey: () => "agent:main:main",
  readSessionUpdatedAt: vi.fn(() => undefined),
  recordSessionMetaFromInbound: vi.fn().mockResolvedValue(undefined),
}));

vi.mock("./pi-embedded.js", () => embeddedRunMock);

vi.mock("./subagent-registry.js", () => subagentRegistryMock);

vi.mock("../config/config.js", async (importOriginal) => {
  const actual = await importOriginal<typeof import("../config/config.js")>();
  return {
    ...actual,
    loadConfig: () => configOverride,
  };
});

describe("subagent announce formatting", () => {
  beforeEach(() => {
    agentSpy.mockClear();
    sessionsDeleteSpy.mockClear();
    embeddedRunMock.isEmbeddedPiRunActive.mockReset().mockReturnValue(false);
    embeddedRunMock.isEmbeddedPiRunStreaming.mockReset().mockReturnValue(false);
    embeddedRunMock.queueEmbeddedPiMessage.mockReset().mockReturnValue(false);
    embeddedRunMock.waitForEmbeddedPiRunEnd.mockReset().mockResolvedValue(true);
    subagentRegistryMock.isSubagentSessionRunActive.mockReset().mockReturnValue(true);
    subagentRegistryMock.countActiveDescendantRuns.mockReset().mockReturnValue(0);
    subagentRegistryMock.resolveRequesterForChildSession.mockReset().mockReturnValue(null);
    readLatestAssistantReplyMock.mockReset().mockResolvedValue("raw subagent reply");
    sessionStore = {};
    configOverride = {
      session: {
        mainKey: "main",
        scope: "per-sender",
      },
    };
  });

  it("sends instructional message to main agent with status and findings", async () => {
    const { runSubagentAnnounceFlow } = await import("./subagent-announce.js");
    sessionStore = {
      "agent:main:subagent:test": {
        sessionId: "child-session-123",
      },
    };
    await runSubagentAnnounceFlow({
      childSessionKey: "agent:main:subagent:test",
      childRunId: "run-123",
      requesterSessionKey: "agent:main:main",
      requesterDisplayKey: "main",
      task: "do thing",
      timeoutMs: 1000,
      cleanup: "keep",
      waitForCompletion: true,
      startedAt: 10,
      endedAt: 20,
    });

    expect(agentSpy).toHaveBeenCalled();
    const call = agentSpy.mock.calls[0]?.[0] as {
      params?: { message?: string; sessionKey?: string };
    };
    const msg = call?.params?.message as string;
    expect(call?.params?.sessionKey).toBe("agent:main:main");
    expect(msg).toContain("[System Message]");
    expect(msg).toContain("[sessionId: child-session-123]");
    expect(msg).toContain("subagent task");
    expect(msg).toContain("failed");
    expect(msg).toContain("boom");
    expect(msg).toContain("Result:");
    expect(msg).toContain("raw subagent reply");
    expect(msg).toContain("Stats:");
    expect(msg).toContain("A completed subagent task is ready for user delivery.");
    expect(msg).toContain("Convert the result above into your normal assistant voice");
    expect(msg).toContain("Keep this internal context private");
  });

  it("includes success status when outcome is ok", async () => {
    const { runSubagentAnnounceFlow } = await import("./subagent-announce.js");
    // Use waitForCompletion: false so it uses the provided outcome instead of calling agent.wait
    await runSubagentAnnounceFlow({
      childSessionKey: "agent:main:subagent:test",
      childRunId: "run-456",
      requesterSessionKey: "agent:main:main",
      requesterDisplayKey: "main",
      task: "do thing",
      timeoutMs: 1000,
      cleanup: "keep",
      waitForCompletion: false,
      startedAt: 10,
      endedAt: 20,
      outcome: { status: "ok" },
    });

    const call = agentSpy.mock.calls[0]?.[0] as { params?: { message?: string } };
    const msg = call?.params?.message as string;
    expect(msg).toContain("completed successfully");
  });

  it("keeps full findings and includes compact stats", async () => {
    const { runSubagentAnnounceFlow } = await import("./subagent-announce.js");
    sessionStore = {
      "agent:main:subagent:test": {
        sessionId: "child-session-usage",
        inputTokens: 12,
        outputTokens: 1000,
        totalTokens: 197000,
      },
    };
    readLatestAssistantReplyMock.mockResolvedValue(
      Array.from({ length: 140 }, (_, index) => `step-${index}`).join(" "),
    );

    await runSubagentAnnounceFlow({
      childSessionKey: "agent:main:subagent:test",
      childRunId: "run-usage",
      requesterSessionKey: "agent:main:main",
      requesterDisplayKey: "main",
      task: "do thing",
      timeoutMs: 1000,
      cleanup: "keep",
      waitForCompletion: false,
      startedAt: 10,
      endedAt: 20,
      outcome: { status: "ok" },
    });

    const call = agentSpy.mock.calls[0]?.[0] as { params?: { message?: string } };
    const msg = call?.params?.message as string;
    expect(msg).toContain("Result:");
    expect(msg).toContain("Stats:");
    expect(msg).toContain("tokens 1.0k (in 12 / out 1.0k)");
    expect(msg).toContain("prompt/cache 197.0k");
    expect(msg).toContain("[sessionId: child-session-usage]");
    expect(msg).toContain("A completed subagent task is ready for user delivery.");
    expect(msg).toContain(
      "Reply ONLY: NO_REPLY if this exact result was already delivered to the user in this same turn.",
    );
    expect(msg).toContain("step-0");
    expect(msg).toContain("step-139");
  });

  it("steers announcements into an active run when queue mode is steer", async () => {
    const { runSubagentAnnounceFlow } = await import("./subagent-announce.js");
    embeddedRunMock.isEmbeddedPiRunActive.mockReturnValue(true);
    embeddedRunMock.isEmbeddedPiRunStreaming.mockReturnValue(true);
    embeddedRunMock.queueEmbeddedPiMessage.mockReturnValue(true);
    sessionStore = {
      "agent:main:main": {
        sessionId: "session-123",
        lastChannel: "whatsapp",
        lastTo: "+1555",
        queueMode: "steer",
      },
    };

    const didAnnounce = await runSubagentAnnounceFlow({
      childSessionKey: "agent:main:subagent:test",
      childRunId: "run-789",
      requesterSessionKey: "main",
      requesterDisplayKey: "main",
      task: "do thing",
      timeoutMs: 1000,
      cleanup: "keep",
      waitForCompletion: false,
      startedAt: 10,
      endedAt: 20,
      outcome: { status: "ok" },
    });

    expect(didAnnounce).toBe(true);
    expect(embeddedRunMock.queueEmbeddedPiMessage).toHaveBeenCalledWith(
      "session-123",
      expect.stringContaining("[System Message]"),
    );
    expect(agentSpy).not.toHaveBeenCalled();
  });

  it("queues announce delivery with origin account routing", async () => {
    const { runSubagentAnnounceFlow } = await import("./subagent-announce.js");
    embeddedRunMock.isEmbeddedPiRunActive.mockReturnValue(true);
    embeddedRunMock.isEmbeddedPiRunStreaming.mockReturnValue(false);
    sessionStore = {
      "agent:main:main": {
        sessionId: "session-456",
        lastChannel: "whatsapp",
        lastTo: "+1555",
        lastAccountId: "kev",
        queueMode: "collect",
        queueDebounceMs: 0,
      },
    };

    const didAnnounce = await runSubagentAnnounceFlow({
      childSessionKey: "agent:main:subagent:test",
      childRunId: "run-999",
      requesterSessionKey: "main",
      requesterDisplayKey: "main",
      task: "do thing",
      timeoutMs: 1000,
      cleanup: "keep",
      waitForCompletion: false,
      startedAt: 10,
      endedAt: 20,
      outcome: { status: "ok" },
    });

    expect(didAnnounce).toBe(true);
    await expect.poll(() => agentSpy.mock.calls.length).toBe(1);

    const call = agentSpy.mock.calls[0]?.[0] as { params?: Record<string, unknown> };
    expect(call?.params?.channel).toBe("whatsapp");
    expect(call?.params?.to).toBe("+1555");
    expect(call?.params?.accountId).toBe("kev");
  });

  it("queues announce delivery back into requester subagent session", async () => {
    const { runSubagentAnnounceFlow } = await import("./subagent-announce.js");
    embeddedRunMock.isEmbeddedPiRunActive.mockReturnValue(true);
    embeddedRunMock.isEmbeddedPiRunStreaming.mockReturnValue(false);
    sessionStore = {
      "agent:main:subagent:orchestrator": {
        sessionId: "session-orchestrator",
        spawnDepth: 1,
        queueMode: "collect",
        queueDebounceMs: 0,
      },
    };

    const didAnnounce = await runSubagentAnnounceFlow({
      childSessionKey: "agent:main:subagent:worker",
      childRunId: "run-worker-queued",
      requesterSessionKey: "agent:main:subagent:orchestrator",
      requesterDisplayKey: "agent:main:subagent:orchestrator",
      requesterOrigin: { channel: "whatsapp", to: "+1555", accountId: "acct" },
      task: "do thing",
      timeoutMs: 1000,
      cleanup: "keep",
      waitForCompletion: false,
      startedAt: 10,
      endedAt: 20,
      outcome: { status: "ok" },
    });

    expect(didAnnounce).toBe(true);
    await expect.poll(() => agentSpy.mock.calls.length).toBe(1);

    const call = agentSpy.mock.calls[0]?.[0] as { params?: Record<string, unknown> };
    expect(call?.params?.sessionKey).toBe("agent:main:subagent:orchestrator");
    expect(call?.params?.deliver).toBe(false);
    expect(call?.params?.channel).toBeUndefined();
    expect(call?.params?.to).toBeUndefined();
  });

  it("includes threadId when origin has an active topic/thread", async () => {
    const { runSubagentAnnounceFlow } = await import("./subagent-announce.js");
    embeddedRunMock.isEmbeddedPiRunActive.mockReturnValue(true);
    embeddedRunMock.isEmbeddedPiRunStreaming.mockReturnValue(false);
    sessionStore = {
      "agent:main:main": {
        sessionId: "session-thread",
        lastChannel: "telegram",
        lastTo: "telegram:123",
        lastThreadId: 42,
        queueMode: "collect",
        queueDebounceMs: 0,
      },
    };

    const didAnnounce = await runSubagentAnnounceFlow({
      childSessionKey: "agent:main:subagent:test",
      childRunId: "run-thread",
      requesterSessionKey: "main",
      requesterDisplayKey: "main",
      task: "do thing",
      timeoutMs: 1000,
      cleanup: "keep",
      waitForCompletion: false,
      startedAt: 10,
      endedAt: 20,
      outcome: { status: "ok" },
    });

    expect(didAnnounce).toBe(true);
    await expect.poll(() => agentSpy.mock.calls.length).toBe(1);

    const call = agentSpy.mock.calls[0]?.[0] as { params?: Record<string, unknown> };
    expect(call?.params?.channel).toBe("telegram");
    expect(call?.params?.to).toBe("telegram:123");
    expect(call?.params?.threadId).toBe("42");
  });

  it("prefers requesterOrigin.threadId over session entry threadId", async () => {
    const { runSubagentAnnounceFlow } = await import("./subagent-announce.js");
    embeddedRunMock.isEmbeddedPiRunActive.mockReturnValue(true);
    embeddedRunMock.isEmbeddedPiRunStreaming.mockReturnValue(false);
    sessionStore = {
      "agent:main:main": {
        sessionId: "session-thread-override",
        lastChannel: "telegram",
        lastTo: "telegram:123",
        lastThreadId: 42,
        queueMode: "collect",
        queueDebounceMs: 0,
      },
    };

    const didAnnounce = await runSubagentAnnounceFlow({
      childSessionKey: "agent:main:subagent:test",
      childRunId: "run-thread-override",
      requesterSessionKey: "main",
      requesterDisplayKey: "main",
      requesterOrigin: {
        channel: "telegram",
        to: "telegram:123",
        threadId: 99,
      },
      task: "do thing",
      timeoutMs: 1000,
      cleanup: "keep",
      waitForCompletion: false,
      startedAt: 10,
      endedAt: 20,
      outcome: { status: "ok" },
    });

    expect(didAnnounce).toBe(true);
    await expect.poll(() => agentSpy.mock.calls.length).toBe(1);

    const call = agentSpy.mock.calls[0]?.[0] as { params?: Record<string, unknown> };
    expect(call?.params?.threadId).toBe("99");
  });

  it("splits collect-mode queues when accountId differs", async () => {
    const { runSubagentAnnounceFlow } = await import("./subagent-announce.js");
    embeddedRunMock.isEmbeddedPiRunActive.mockReturnValue(true);
    embeddedRunMock.isEmbeddedPiRunStreaming.mockReturnValue(false);
    sessionStore = {
      "agent:main:main": {
        sessionId: "session-acc-split",
        lastChannel: "whatsapp",
        lastTo: "+1555",
        queueMode: "collect",
        queueDebounceMs: 0,
      },
    };

    await Promise.all([
      runSubagentAnnounceFlow({
        childSessionKey: "agent:main:subagent:test-a",
        childRunId: "run-a",
        requesterSessionKey: "main",
        requesterDisplayKey: "main",
        requesterOrigin: { accountId: "acct-a" },
        task: "do thing",
        timeoutMs: 1000,
        cleanup: "keep",
        waitForCompletion: false,
        startedAt: 10,
        endedAt: 20,
        outcome: { status: "ok" },
      }),
      runSubagentAnnounceFlow({
        childSessionKey: "agent:main:subagent:test-b",
        childRunId: "run-b",
        requesterSessionKey: "main",
        requesterDisplayKey: "main",
        requesterOrigin: { accountId: "acct-b" },
        task: "do thing",
        timeoutMs: 1000,
        cleanup: "keep",
        waitForCompletion: false,
        startedAt: 10,
        endedAt: 20,
        outcome: { status: "ok" },
      }),
    ]);

    await expect.poll(() => agentSpy.mock.calls.length).toBe(2);
    expect(agentSpy).toHaveBeenCalledTimes(2);
    const accountIds = agentSpy.mock.calls.map(
      (call) => (call?.[0] as { params?: { accountId?: string } })?.params?.accountId,
    );
    expect(accountIds).toEqual(expect.arrayContaining(["acct-a", "acct-b"]));
  });

  it("uses requester origin for direct announce when not queued", async () => {
    const { runSubagentAnnounceFlow } = await import("./subagent-announce.js");
    embeddedRunMock.isEmbeddedPiRunActive.mockReturnValue(false);
    embeddedRunMock.isEmbeddedPiRunStreaming.mockReturnValue(false);

    const didAnnounce = await runSubagentAnnounceFlow({
      childSessionKey: "agent:main:subagent:test",
      childRunId: "run-direct",
      requesterSessionKey: "agent:main:main",
      requesterOrigin: { channel: "whatsapp", accountId: "acct-123" },
      requesterDisplayKey: "main",
      task: "do thing",
      timeoutMs: 1000,
      cleanup: "keep",
      waitForCompletion: false,
      startedAt: 10,
      endedAt: 20,
      outcome: { status: "ok" },
    });

    expect(didAnnounce).toBe(true);
    const call = agentSpy.mock.calls[0]?.[0] as {
      params?: Record<string, unknown>;
      expectFinal?: boolean;
    };
    expect(call?.params?.channel).toBe("whatsapp");
    expect(call?.params?.accountId).toBe("acct-123");
    expect(call?.expectFinal).toBe(true);
  });

  it("injects direct announce into requester subagent session instead of chat channel", async () => {
    const { runSubagentAnnounceFlow } = await import("./subagent-announce.js");
    embeddedRunMock.isEmbeddedPiRunActive.mockReturnValue(false);
    embeddedRunMock.isEmbeddedPiRunStreaming.mockReturnValue(false);

    const didAnnounce = await runSubagentAnnounceFlow({
      childSessionKey: "agent:main:subagent:worker",
      childRunId: "run-worker",
      requesterSessionKey: "agent:main:subagent:orchestrator",
      requesterOrigin: { channel: "whatsapp", accountId: "acct-123", to: "+1555" },
      requesterDisplayKey: "agent:main:subagent:orchestrator",
      task: "do thing",
      timeoutMs: 1000,
      cleanup: "keep",
      waitForCompletion: false,
      startedAt: 10,
      endedAt: 20,
      outcome: { status: "ok" },
    });

    expect(didAnnounce).toBe(true);
    const call = agentSpy.mock.calls[0]?.[0] as { params?: Record<string, unknown> };
    expect(call?.params?.sessionKey).toBe("agent:main:subagent:orchestrator");
    expect(call?.params?.deliver).toBe(false);
    expect(call?.params?.channel).toBeUndefined();
    expect(call?.params?.to).toBeUndefined();
  });

  it("retries reading subagent output when early lifecycle completion had no text", async () => {
    const { runSubagentAnnounceFlow } = await import("./subagent-announce.js");
    embeddedRunMock.isEmbeddedPiRunActive.mockReturnValueOnce(true).mockReturnValue(false);
    embeddedRunMock.waitForEmbeddedPiRunEnd.mockResolvedValue(true);
    readLatestAssistantReplyMock
      .mockResolvedValueOnce(undefined)
      .mockResolvedValueOnce("Read #12 complete.");
    sessionStore = {
      "agent:main:subagent:test": {
        sessionId: "child-session-1",
      },
    };

    await runSubagentAnnounceFlow({
      childSessionKey: "agent:main:subagent:test",
      childRunId: "run-child",
      requesterSessionKey: "agent:main:main",
      requesterDisplayKey: "main",
      task: "context-stress-test",
      timeoutMs: 1000,
      cleanup: "keep",
      waitForCompletion: false,
      startedAt: 10,
      endedAt: 20,
      outcome: { status: "ok" },
    });

    expect(embeddedRunMock.waitForEmbeddedPiRunEnd).toHaveBeenCalledWith("child-session-1", 1000);
    const call = agentSpy.mock.calls[0]?.[0] as { params?: { message?: string } };
    expect(call?.params?.message).toContain("Read #12 complete.");
    expect(call?.params?.message).not.toContain("(no output)");
  });

  it("uses advisory guidance when sibling subagents are still active", async () => {
    const { runSubagentAnnounceFlow } = await import("./subagent-announce.js");
    subagentRegistryMock.countActiveDescendantRuns.mockImplementation((sessionKey: string) =>
      sessionKey === "agent:main:main" ? 2 : 0,
    );

    await runSubagentAnnounceFlow({
      childSessionKey: "agent:main:subagent:test",
      childRunId: "run-child",
      requesterSessionKey: "agent:main:main",
      requesterDisplayKey: "main",
      task: "do thing",
      timeoutMs: 1000,
      cleanup: "keep",
      waitForCompletion: false,
      startedAt: 10,
      endedAt: 20,
      outcome: { status: "ok" },
    });

    const call = agentSpy.mock.calls[0]?.[0] as { params?: { message?: string } };
    const msg = call?.params?.message as string;
    expect(msg).toContain("There are still 2 active subagent runs for this session.");
    expect(msg).toContain(
      "If they are part of the same workflow, wait for the remaining results before sending a user update.",
    );
    expect(msg).toContain("If they are unrelated, respond normally using only the result above.");
  });

  it("defers announce while the finished run still has active descendants", async () => {
    const { runSubagentAnnounceFlow } = await import("./subagent-announce.js");
    subagentRegistryMock.countActiveDescendantRuns.mockImplementation((sessionKey: string) =>
      sessionKey === "agent:main:subagent:parent" ? 1 : 0,
    );

    const didAnnounce = await runSubagentAnnounceFlow({
      childSessionKey: "agent:main:subagent:parent",
      childRunId: "run-parent",
      requesterSessionKey: "agent:main:main",
      requesterDisplayKey: "main",
      task: "do thing",
      timeoutMs: 1000,
      cleanup: "keep",
      waitForCompletion: false,
      startedAt: 10,
      endedAt: 20,
      outcome: { status: "ok" },
    });

    expect(didAnnounce).toBe(false);
    expect(agentSpy).not.toHaveBeenCalled();
  });

  it("bubbles child announce to parent requester when requester subagent already ended", async () => {
    const { runSubagentAnnounceFlow } = await import("./subagent-announce.js");
    subagentRegistryMock.isSubagentSessionRunActive.mockReturnValue(false);
    subagentRegistryMock.resolveRequesterForChildSession.mockReturnValue({
      requesterSessionKey: "agent:main:main",
      requesterOrigin: { channel: "whatsapp", to: "+1555", accountId: "acct-main" },
    });

    const didAnnounce = await runSubagentAnnounceFlow({
      childSessionKey: "agent:main:subagent:leaf",
      childRunId: "run-leaf",
|
|
requesterSessionKey: "agent:main:subagent:orchestrator",
|
|
requesterDisplayKey: "agent:main:subagent:orchestrator",
|
|
task: "do thing",
|
|
timeoutMs: 1000,
|
|
cleanup: "keep",
|
|
waitForCompletion: false,
|
|
startedAt: 10,
|
|
endedAt: 20,
|
|
outcome: { status: "ok" },
|
|
});
|
|
|
|
expect(didAnnounce).toBe(true);
|
|
const call = agentSpy.mock.calls[0]?.[0] as { params?: Record<string, unknown> };
|
|
expect(call?.params?.sessionKey).toBe("agent:main:main");
|
|
expect(call?.params?.deliver).toBe(true);
|
|
expect(call?.params?.channel).toBe("whatsapp");
|
|
expect(call?.params?.to).toBe("+1555");
|
|
expect(call?.params?.accountId).toBe("acct-main");
|
|
});
|
|
|
|
it("keeps announce retryable when ended requester subagent has no fallback requester", async () => {
|
|
const { runSubagentAnnounceFlow } = await import("./subagent-announce.js");
|
|
subagentRegistryMock.isSubagentSessionRunActive.mockReturnValue(false);
|
|
subagentRegistryMock.resolveRequesterForChildSession.mockReturnValue(null);
|
|
|
|
const didAnnounce = await runSubagentAnnounceFlow({
|
|
childSessionKey: "agent:main:subagent:leaf",
|
|
childRunId: "run-leaf-missing-fallback",
|
|
requesterSessionKey: "agent:main:subagent:orchestrator",
|
|
requesterDisplayKey: "agent:main:subagent:orchestrator",
|
|
task: "do thing",
|
|
timeoutMs: 1000,
|
|
cleanup: "delete",
|
|
waitForCompletion: false,
|
|
startedAt: 10,
|
|
endedAt: 20,
|
|
outcome: { status: "ok" },
|
|
});
|
|
|
|
expect(didAnnounce).toBe(false);
|
|
expect(subagentRegistryMock.resolveRequesterForChildSession).toHaveBeenCalledWith(
|
|
"agent:main:subagent:orchestrator",
|
|
);
|
|
expect(agentSpy).not.toHaveBeenCalled();
|
|
expect(sessionsDeleteSpy).not.toHaveBeenCalled();
|
|
});
|
|
|
|
it("defers announce when child run is still active after wait timeout", async () => {
|
|
const { runSubagentAnnounceFlow } = await import("./subagent-announce.js");
|
|
embeddedRunMock.isEmbeddedPiRunActive.mockReturnValue(true);
|
|
embeddedRunMock.waitForEmbeddedPiRunEnd.mockResolvedValue(false);
|
|
sessionStore = {
|
|
"agent:main:subagent:test": {
|
|
sessionId: "child-session-active",
|
|
},
|
|
};
|
|
|
|
const didAnnounce = await runSubagentAnnounceFlow({
|
|
childSessionKey: "agent:main:subagent:test",
|
|
childRunId: "run-child-active",
|
|
requesterSessionKey: "agent:main:main",
|
|
requesterDisplayKey: "main",
|
|
task: "context-stress-test",
|
|
timeoutMs: 1000,
|
|
cleanup: "keep",
|
|
waitForCompletion: false,
|
|
startedAt: 10,
|
|
endedAt: 20,
|
|
outcome: { status: "ok" },
|
|
});
|
|
|
|
expect(didAnnounce).toBe(false);
|
|
expect(agentSpy).not.toHaveBeenCalled();
|
|
});
|
|
|
|
it("does not delete child session when announce is deferred for an active run", async () => {
|
|
const { runSubagentAnnounceFlow } = await import("./subagent-announce.js");
|
|
embeddedRunMock.isEmbeddedPiRunActive.mockReturnValue(true);
|
|
embeddedRunMock.waitForEmbeddedPiRunEnd.mockResolvedValue(false);
|
|
sessionStore = {
|
|
"agent:main:subagent:test": {
|
|
sessionId: "child-session-active",
|
|
},
|
|
};
|
|
|
|
const didAnnounce = await runSubagentAnnounceFlow({
|
|
childSessionKey: "agent:main:subagent:test",
|
|
childRunId: "run-child-active-delete",
|
|
requesterSessionKey: "agent:main:main",
|
|
requesterDisplayKey: "main",
|
|
task: "context-stress-test",
|
|
timeoutMs: 1000,
|
|
cleanup: "delete",
|
|
waitForCompletion: false,
|
|
startedAt: 10,
|
|
endedAt: 20,
|
|
outcome: { status: "ok" },
|
|
});
|
|
|
|
expect(didAnnounce).toBe(false);
|
|
expect(sessionsDeleteSpy).not.toHaveBeenCalled();
|
|
});
|
|
|
|
it("normalizes requesterOrigin for direct announce delivery", async () => {
|
|
const { runSubagentAnnounceFlow } = await import("./subagent-announce.js");
|
|
embeddedRunMock.isEmbeddedPiRunActive.mockReturnValue(false);
|
|
embeddedRunMock.isEmbeddedPiRunStreaming.mockReturnValue(false);
|
|
|
|
const didAnnounce = await runSubagentAnnounceFlow({
|
|
childSessionKey: "agent:main:subagent:test",
|
|
childRunId: "run-direct-origin",
|
|
requesterSessionKey: "agent:main:main",
|
|
requesterOrigin: { channel: " whatsapp ", accountId: " acct-987 " },
|
|
requesterDisplayKey: "main",
|
|
task: "do thing",
|
|
timeoutMs: 1000,
|
|
cleanup: "keep",
|
|
waitForCompletion: false,
|
|
startedAt: 10,
|
|
endedAt: 20,
|
|
outcome: { status: "ok" },
|
|
});
|
|
|
|
expect(didAnnounce).toBe(true);
|
|
const call = agentSpy.mock.calls[0]?.[0] as { params?: Record<string, unknown> };
|
|
expect(call?.params?.channel).toBe("whatsapp");
|
|
expect(call?.params?.accountId).toBe("acct-987");
|
|
});
|
|
|
|
it("prefers requesterOrigin channel over stale session lastChannel in queued announce", async () => {
|
|
const { runSubagentAnnounceFlow } = await import("./subagent-announce.js");
|
|
embeddedRunMock.isEmbeddedPiRunActive.mockReturnValue(true);
|
|
embeddedRunMock.isEmbeddedPiRunStreaming.mockReturnValue(false);
|
|
// Session store has stale whatsapp channel, but the requesterOrigin says bluebubbles.
|
|
sessionStore = {
|
|
"agent:main:main": {
|
|
sessionId: "session-stale",
|
|
lastChannel: "whatsapp",
|
|
queueMode: "collect",
|
|
queueDebounceMs: 0,
|
|
},
|
|
};
|
|
|
|
const didAnnounce = await runSubagentAnnounceFlow({
|
|
childSessionKey: "agent:main:subagent:test",
|
|
childRunId: "run-stale-channel",
|
|
requesterSessionKey: "main",
|
|
requesterOrigin: { channel: "bluebubbles", to: "bluebubbles:chat_guid:123" },
|
|
requesterDisplayKey: "main",
|
|
task: "do thing",
|
|
timeoutMs: 1000,
|
|
cleanup: "keep",
|
|
waitForCompletion: false,
|
|
startedAt: 10,
|
|
endedAt: 20,
|
|
outcome: { status: "ok" },
|
|
});
|
|
|
|
expect(didAnnounce).toBe(true);
|
|
await expect.poll(() => agentSpy.mock.calls.length).toBe(1);
|
|
|
|
const call = agentSpy.mock.calls[0]?.[0] as { params?: Record<string, unknown> };
|
|
// The channel should match requesterOrigin, NOT the stale session entry.
|
|
expect(call?.params?.channel).toBe("bluebubbles");
|
|
expect(call?.params?.to).toBe("bluebubbles:chat_guid:123");
|
|
});
|
|
});
|