spawn
Run a child step in an isolated context with optional spawn-local memory layers.
Quick Example
import { spawn, step } from '@noetic/core';
const isolated = spawn({
id: 'sub-agent',
child: step.llm({
id: 'researcher',
model: 'gpt-4o',
instructions: 'Research the topic thoroughly.',
}),
});
What It Does
spawn runs a child step in an isolated context. The child starts with an empty ItemLog by default, giving it a fresh conversation history and preventing it from polluting the parent context.
You can attach spawn-local memory layers via the optional memory field. These layers use onSpawn hooks to provide items to the child and onReturn hooks to transform results back to the parent. Spawn-local memory fully replaces parent layer propagation, ensuring complete isolation.
This is the primitive for sub-agents, delegation, and sandboxed execution.
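To make the data flow across the spawn boundary concrete, here is a minimal conceptual sketch in plain TypeScript. It is not the @noetic/core implementation — the names `runIsolated`, `SpawnHooks`, and the hook signatures are illustrative assumptions — but it models the three steps the prose describes: onSpawn seeds the child's fresh log, the child runs in isolation, and onReturn decides what flows back.

```typescript
// Illustrative sketch only — NOT the @noetic/core implementation.
type Item = { role: string; text: string };

interface SpawnHooks<P, C, R> {
  onSpawn: (parentState: P) => { childState: C; items: Item[] };
  onReturn: (parentState: P, result: R) => { parentState: P; result: R };
}

function runIsolated<P, C, R>(
  parentState: P,
  hooks: SpawnHooks<P, C, R>,
  child: (items: Item[], state: C) => R,
): { parentState: P; result: R } {
  // 1. The child starts from an EMPTY log; only onSpawn-provided items are visible.
  const { childState, items } = hooks.onSpawn(parentState);
  // 2. The child runs against its own log — nothing it appends touches the parent.
  const result = child(items, childState);
  // 3. onReturn controls exactly what crosses back over the boundary.
  return hooks.onReturn(parentState, result);
}
```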
API Reference
| Property | Type | Required | Description |
|---|---|---|---|
| id | string | Yes | Unique step identifier |
| child | Step&lt;TMemory, I, O&gt; | Yes | The step to execute in isolation |
| memory | MemoryConfig \| MemoryLayer[] | No | Spawn-local memory layers (use the memory() builder for typed access) |
| timeout | number | No | Timeout in milliseconds |
| subprocess | SubprocessAdapter | No | Per-step adapter override; see SubprocessAdapter Routing below |
Memory Layers
The memory field accepts either a raw MemoryLayer[] array or a MemoryConfig object produced by the memory() builder. The builder provides typed access to layer state and is the recommended approach.
Spawn-local memory layers control what the child sees and how results are transformed when the child completes.
- onSpawn hooks provide items to the child's initial context. Each layer can inject items (e.g., relevant conversation history, cached knowledge, instructions) into the child's empty ItemLog.
- onReturn hooks transform the child's results before they flow back to the parent. Layers can summarize, filter, or restructure the output.
import { spawn, step } from '@noetic/core';
const withMemory = spawn({
id: 'research-agent',
child: step.llm({
id: 'researcher',
model: 'gpt-4o',
instructions: 'Research the topic thoroughly.',
}),
memory: [
{
id: 'context-layer',
name: 'Context Provider',
slot: 100,
scope: 'execution',
hooks: {
onSpawn: async ({ parentState, childCtx }) => ({
childState: parentState,
items: [
{
type: 'message',
role: 'user',
content: [
{
type: 'input_text',
text: 'Here is background context for your research.',
},
],
},
],
}),
onReturn: async ({ childState, childLog, parentState, result }) => ({
parentState: { ...parentState, lastResearch: result },
result: result,
}),
},
},
],
});
Timeout
Set a maximum execution time for the child. If the timeout is exceeded, the spawn throws an error.
const spawned = spawn({
id: 'time-limited',
child: myStep,
timeout: 3e4, // 30 seconds
});
Real-World Example: Research Delegation with Memory
import { spawn, step, until } from '@noetic/core';
const researchAgent = spawn({
id: 'delegated-research',
child: {
kind: 'loop',
id: 'research-loop',
steps: [step.llm({
id: 'researcher-llm',
model: 'gpt-4o',
instructions: 'You are a research assistant. Use tools to find information.',
tools: [searchTool, fetchTool],
})],
until: until.noToolCalls(),
maxIterations: 10,
},
memory: [
{
id: 'research-briefing',
name: 'Research Briefing',
slot: 100,
scope: 'execution',
hooks: {
onSpawn: async ({ parentState }) => ({
childState: null,
items: [
{
type: 'message',
role: 'user',
content: [
{
type: 'input_text',
text: `Research the following topic: ${parentState.topic}`,
},
],
},
],
}),
onReturn: async ({ childLog, parentState, result }) => ({
parentState: { ...parentState, findings: result },
result: result,
}),
},
},
],
timeout: 6e4, // 60 seconds
});
Detached Spawn
For background sub-agents that run concurrently, use harness.detachedSpawn(). The parent continues immediately and receives a DetachedHandle to track the child.
import type { DetachedHandle } from '@noetic/core';
const handle = harness.detachedSpawn(subAgentStep, input, ctx);
// handle.status === 'running' — parent continues immediately
// Poll status
if (handle.status === 'completed') {
console.log(handle.result);
}
// Or await the result
const result = await handle.await(); // blocks until done
const bounded = await handle.await(5_000); // throws if not completed within the timeout
Isolating background work from the parent's session
By default, detachedSpawn reuses the parent's threadId and resourceId,
so any items the child appends land in the parent's session.accumulatedItems
and replay in the parent's next turn. For long-running background sub-agents
whose work should NOT pollute the parent's history (e.g. a separate research
teammate), pass an overrides argument:
const handle = harness.detachedSpawn(subAgentStep, input, ctx, {
threadId: `teammate-${agentId}`,
});
The child runs against a fresh per-teammate session log; the parent's history
remains uncontaminated. The handle and its .await() / .status semantics
are unchanged.
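The effect of the threadId override can be sketched with a simple per-thread log map. This is an illustrative model only — the real session store and its API are not shown here — but it captures why an overridden threadId keeps the child's items out of the parent's history.

```typescript
// Illustrative sketch — models how per-thread session logs keep
// a detached child's items out of the parent's history.
const sessionLogs = new Map<string, string[]>();

function append(threadId: string, item: string): void {
  const log = sessionLogs.get(threadId) ?? [];
  log.push(item);
  sessionLogs.set(threadId, log);
}

// Default: the child reuses the parent's threadId, so its items
// land in the same log and replay in the parent's next turn.
append('parent-thread', 'parent turn');
append('parent-thread', 'child output (no override)');

// With a threadId override, the child writes to its own per-teammate log.
append('teammate-42', 'child output (threadId override)');
```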
DetachedHandle API
| Property | Type | Description |
|---|---|---|
| id | string | Unique handle identifier (child context ID) |
| status | 'running' \| 'completed' \| 'failed' | Current execution status |
| result | O \| undefined | Child output (set when completed) |
| error | string \| undefined | Error message (set when failed) |
| await(timeout?) | Promise&lt;O&gt; | Wait for completion, optionally with timeout |
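The table above maps onto a small state machine: the handle starts `'running'`, settles once, and `await(timeout?)` races completion against a timer. The mock below is a sketch under those assumptions, not the real adapter's handle class.

```typescript
// Minimal illustrative mock of the DetachedHandle shape — not the real adapter.
type Status = 'running' | 'completed' | 'failed';

class MockHandle<O> {
  status: Status = 'running';
  result?: O;
  error?: string;
  private done: Promise<O>;

  constructor(work: Promise<O>) {
    // Settle the status fields exactly once, when the child finishes.
    this.done = work.then(
      (r) => { this.status = 'completed'; this.result = r; return r; },
      (e) => { this.status = 'failed'; this.error = String(e); throw e; },
    );
  }

  // await(timeout?) — resolves with the result, or rejects if the timer fires first.
  await(timeoutMs?: number): Promise<O> {
    if (timeoutMs === undefined) return this.done;
    return Promise.race([
      this.done,
      new Promise<O>((_, reject) =>
        setTimeout(() => reject(new Error('await timeout')), timeoutMs),
      ),
    ]);
  }
}
```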
Async Sub-Agent Pattern
Combine detached spawn with the loop inbox channel for a full async delegation workflow:
import { channel } from '@noetic/core';
import { z } from 'zod';
const inbox = channel('agent-inbox', { schema: z.string(), mode: 'queue' });
// In a tool's execute function — use the parent ctx passed to execute(args, ctx):
const handle = harness.detachedSpawn(subAgentStep, task, parentCtx);
handle.await().then((result) => {
harness.send(inbox, `[Sub-agent done] ${result}`, parentCtx);
});
// The parent loop uses the inbox to wake when results arrive:
const agentLoop = {
kind: 'loop',
id: 'async-agent',
steps: [llmStep],
until: until.noToolCalls(),
inbox,
parkTimeout: 3e4,
};
Dynamic Delegation via Tool Calling
Give the LLM both a sync and async delegation tool. The LLM decides at runtime which to use based on the task — blocking when it needs the result immediately, or launching in the background when it can continue working.
import { channel, spawn, step, tool, until } from '@noetic/core';
import type { Context } from '@noetic/core';
import { z } from 'zod';
const inbox = channel('agent-inbox', { schema: z.string(), mode: 'queue' });
// Tool 1: Sync — blocks until sub-agent finishes, returns result directly
// The execute function receives the parent context as its second argument.
const delegateTool = tool({
name: 'delegate',
description: 'Run a sub-agent and wait for its result. Use when you need the answer now.',
input: z.object({ task: z.string() }),
output: z.string(),
execute: async (args: { task: string }, parentCtx: Context) => {
const child = step.llm({ id: 'sync-sub', model: 'gpt-4o', instructions: 'Answer concisely.' });
return harness.run(spawn({ id: 'sync-spawn', child }), args.task, parentCtx);
},
});
// Tool 2: Async — launches sub-agent in background, notifies via inbox when done
const launchTool = tool({
name: 'launch_agent',
description: 'Launch a sub-agent in the background. Use when you can continue other work.',
input: z.object({ task: z.string() }),
output: z.object({ agentId: z.string() }),
execute: async (args: { task: string }, parentCtx: Context) => {
const child = step.llm<string, string>({
id: 'async-sub', model: 'gpt-4o', instructions: 'Answer concisely.',
});
const handle = harness.detachedSpawn(child, args.task, parentCtx);
// Notify the parent loop when the sub-agent finishes
handle.await().then(
(result) => harness.send(inbox, `[Agent ${handle.id} done] ${result}`, parentCtx),
(err) => harness.send(inbox, `[Agent ${handle.id} failed] ${err}`, parentCtx),
);
return { agentId: handle.id };
},
});
// The agent loop: LLM chooses which tool to call based on the situation
const agent = {
kind: 'loop',
id: 'smart-delegator',
steps: [step.llm({
id: 'orchestrator',
model: 'gpt-4o',
instructions: `You are an orchestrator. You have two delegation tools:
- delegate: blocks and returns the result. Use for tasks you need answered before continuing.
- launch_agent: runs in background. Use when you can keep working while it runs.`,
tools: [delegateTool, launchTool],
})],
until: until.noToolCalls(),
inbox, // wakes the loop when background agents finish
parkTimeout: 3e4,
};
The LLM sees both tools and their descriptions, then decides per-task:
- "I need this answer to continue" → calls delegate (sync, blocks)
- "Research this while I handle other things" → calls launch_agent (async, continues)
When a background agent finishes, its result arrives via the inbox channel as a developer message. The loop wakes, the LLM sees the result, and incorporates it.
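The queue-mode inbox behavior described here can be sketched in a few lines. This is an illustrative model, not the @noetic/core channel implementation: `send` enqueues a message (waking a parked loop if one is waiting), and the woken loop drains everything at once.

```typescript
// Illustrative sketch of a queue-mode inbox — not the @noetic/core channel API.
class QueueChannel<T> {
  private queue: T[] = [];
  private waker: (() => void) | null = null;

  send(msg: T): void {
    this.queue.push(msg);
    this.waker?.();      // wake a parked loop, if one is waiting
    this.waker = null;
  }

  park(wake: () => void): void {
    this.waker = wake;   // the loop parks until a message (or parkTimeout) arrives
  }

  drain(): T[] {         // the woken loop consumes all pending messages at once
    const msgs = this.queue;
    this.queue = [];
    return msgs;
  }
}
```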
SubprocessAdapter Routing
Every step.run, spawn, and harness.detachedSpawn dispatches through a SubprocessAdapter. The harness defaults to createInMemorySubprocessAdapter() so zero-config callers keep their existing in-process behaviour; swap in a different adapter to change how a step actually runs.
import { spawn, step } from '@noetic/core';
import { createLocalSubprocessAdapter } from '@noetic/platform-node';
import { createFileStorage } from '@noetic/core';
// Run this spawn out-of-process with durable handle manifests.
const subprocess = createLocalSubprocessAdapter({
storage: createFileStorage({ root: `${process.env.HOME}/.noetic/subprocess` }),
});
const isolated = spawn({
id: 'sub-agent',
child: step.llm({ id: 'researcher', model: 'gpt-4o' }),
subprocess, // per-step override
});
Resolution Precedence
When the interpreter dispatches a run or spawn, it resolves the adapter in this order:
1. Per-call override — the overrides.subprocess argument to harness.detachedSpawn(step, input, ctx, overrides).
2. Per-step override — the subprocess field on StepRun or StepSpawn.
3. Harness default — harness.subprocess (defaults to createInMemorySubprocessAdapter()).
Other step kinds (llm, tool, branch, fork, provide, loop) always use the harness default.
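The three-level precedence reduces to a first-defined-wins lookup. The sketch below is illustrative — `resolveAdapter` and the `Adapter` shape are assumptions, not the interpreter's actual internals — but the ordering matches the list above.

```typescript
// Illustrative sketch of the three-level precedence — names are assumptions.
type Adapter = { name: string };

function resolveAdapter(
  perCall: Adapter | undefined,    // overrides.subprocess on detachedSpawn
  perStep: Adapter | undefined,    // the step's own subprocess field
  harnessDefault: Adapter,         // harness.subprocess (always defined)
): Adapter {
  // First defined wins: per-call, then per-step, then the harness default.
  return perCall ?? perStep ?? harnessDefault;
}
```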
Step Registry
When an adapter crosses a process boundary, the child runtime must locate the step body by id. Every step builder auto-registers its result in a shared registry (@noetic/core/runtime/step-registry). lookupStep(stepId) is the cross-process contract — the child process imports the same entry module as the parent and resolves the step by id before executing it.
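An id-keyed registry of this kind can be sketched with a plain Map. This is illustrative only — the real @noetic/core/runtime/step-registry module is not reproduced here — but it shows the contract: builders register on construction, and the child process resolves by id before executing.

```typescript
// Illustrative sketch of an id-keyed step registry — not the real module.
type AnyStep = { id: string; run: (input: unknown) => unknown };

const registry = new Map<string, AnyStep>();

function registerStep(step: AnyStep): AnyStep {
  registry.set(step.id, step);   // builders would call this automatically
  return step;
}

function lookupStep(stepId: string): AnyStep {
  const step = registry.get(stepId);
  // A child process that never imported the defining module cannot resolve the id.
  if (!step) throw new Error(`Unknown step: ${stepId}`);
  return step;
}
```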
Durable Handle Manifests
Adapters configured with a durable StorageAdapter persist a manifest per handle: handleId, stepId, serializedInput, executionId, and transport-specific identity (pid + pidStarttime for OS children; socketPath for IPC). On parent restart, adapter.listLive() rediscovers the still-running children and adapter.reattach(handleId) rebinds each handle. See the Durability page for the full restart flow.
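The manifest fields listed above, and the pid + pidStarttime identity check for OS children, can be sketched as follows. The field names follow the prose; the `isSameProcess` helper is an illustrative assumption, not the adapter's actual reattach logic.

```typescript
// Illustrative sketch of a handle manifest and the pid+starttime liveness check.
interface HandleManifest {
  handleId: string;
  stepId: string;
  serializedInput: string;
  executionId: string;
  pid: number;
  pidStarttime: number;   // guards against pid reuse after the child exits
}

function isSameProcess(
  manifest: HandleManifest,
  live: { pid: number; starttime: number },
): boolean {
  // A recycled pid will have a different start time, so both fields must match
  // before reattach() rebinds the handle to the running child.
  return manifest.pid === live.pid && manifest.pidStarttime === live.starttime;
}
```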
provide vs spawn
Use provide when you want to attach memory layers to a subtree without isolating the child context. The child shares the parent's ItemLog and conversation history -- layers simply become available to all descendant steps.
Use spawn when you need full context isolation. The child starts with an empty ItemLog, and onSpawn / onReturn hooks control data flow across the boundary.
| | provide | spawn |
|---|---|---|
| Child context | Shared with parent | Isolated (empty ItemLog) |
| Memory layers | Inherited by descendants | Spawn-local only |
| Use case | Attach layers to a subtree | Sub-agents, delegation, sandboxing |
Related
- provide -- attach memory layers without context isolation.
- fork -- parallel execution without context isolation.
- Loop & Until -- use a loop as a spawn child for iterative sub-agents.
- Overview -- how spawn fits into the seven primitives.