Context & Event Log

The Context object tracks execution state, metrics, and conversation history.

Quick Example

import { step } from '@noetic/core';

const greet = step.run({
  id: 'greet',
  execute: async (input: string, ctx) => {
    console.log(`Execution ${ctx.id}, step #${ctx.stepCount}`);
    console.log(`Tokens so far: ${ctx.tokens.total}`);

    // Append a user message to the event log
    ctx.itemLog.append({
      id: crypto.randomUUID(),
      type: 'message',
      role: 'user',
      content: [{ type: 'input_text', text: input }],
      status: 'completed',
    });

    return `Hello, ${input}!`;
  },
});

Every step in Noetic receives a Context object as its second argument. The context carries execution metadata, token budgets, the conversation item log, memory layer handles, channel methods, and lifecycle controls. It is the single source of truth for everything that has happened during a run.

Context Interface

type ContextShape<TMemory = ContextMemory, TState = unknown> = Context<TMemory, TState>;

The TMemory generic defaults to ContextMemory (an untyped record). Use InferMemory<typeof config> to supply a fully typed shape.

  • id (string) -- Unique identifier for this execution.
  • stepCount (number) -- Number of steps executed so far.
  • tokens (TokenUsage) -- Cumulative token counts (input, output, total).
  • elapsed (number) -- Wall-clock milliseconds since context creation.
  • cost (number) -- Cumulative cost estimate for LLM calls.
  • state (TState) -- Mutable, generic state object. You can read and write to this freely.
  • memory (TMemory) -- Layer-provided handles keyed by layer ID. See Memory below.
  • parent (Context | null) -- Parent context when running inside a spawn.
  • depth (number) -- Nesting depth (0 for root).
  • span (Span) -- The active tracing span for this execution.
  • threadId (string) -- Conversation thread identifier.
  • resourceId (string | undefined) -- Optional resource identifier (e.g., a user or tenant ID).
  • itemLog (ItemLog) -- The conversation event log.
  • lastStepMeta (StepMeta | null) -- Metadata from the most recently completed step (tool calls, usage, cost).
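
The state slot is the one property you are expected to mutate directly. A minimal sketch of how it behaves (a plain object stands in for the real Context here, so this is illustrative only):

```typescript
// Illustrative only: a plain object stands in for the real Context.
interface CounterState {
  visits: number;
}

const ctx = { state: { visits: 0 } as CounterState };

function trackVisit(c: { state: CounterState }): void {
  // Steps may freely read and mutate state between invocations.
  c.state.visits += 1;
}

trackVisit(ctx);
trackVisit(ctx);
// ctx.state.visits is now 2
```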

Memory

The ctx.memory property is a readonly object keyed by layer ID. Each key maps to a handle containing the data projections and callable functions that the layer declared in its provides field.

declare const ctx: Context<Record<string, { snapshot: unknown; update: (next: unknown) => Promise<void> }>>;

const snap = ctx.memory['working-memory'].snapshot;

await ctx.memory['working-memory'].update({ key: 'value' });

At runtime, data entries use getters (reading live state on each access), while function entries are async closures that validate input via Zod and may update the layer's internal state.
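
That behavior can be sketched with a simplified handle (illustrative only: Zod validation is omitted, and makeHandle is a stand-in, not a real Noetic API):

```typescript
// Sketch: data entries read live state through a getter; function entries
// are async closures that mutate the layer's internal state.
function makeHandle(initial: Record<string, unknown>) {
  let state = { ...initial };
  return {
    get snapshot(): Record<string, unknown> {
      // The getter re-reads live state on each access.
      return { ...state };
    },
    async update(next: Record<string, unknown>): Promise<void> {
      // Real layers validate `next` with Zod before applying it.
      state = { ...state, ...next };
    },
  };
}

const handle = makeHandle({ topic: 'weather' });
void handle.update({ city: 'Paris' });
// handle.snapshot now reflects both keys
```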

When layers are wrapped with the memory() builder, use InferMemory<typeof config> to get compile-time types:

import { memory, workingMemory, type InferMemory } from '@noetic/core';

const mem = memory([workingMemory()]);
type Mem = InferMemory<typeof mem>;

// ctx: Context<Mem> gives fully typed ctx.memory

TokenUsage

interface TokenUsage {
  input: number;
  output: number;
  total: number;
}

Item Log

The ItemLog is an append-only log of every message, tool call, and reasoning trace produced during an execution.

interface ItemLog {
  readonly items: ReadonlyArray<Item>;
  append(item: Item): void;
}
  • items -- read the full history at any time.
  • append(item) -- add a new item. The runtime also appends items automatically after each LLM step.
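
The contract above can be sketched with a minimal in-memory implementation (illustrative only; the runtime's ItemLog does more, and the Item type here is pared down):

```typescript
// Pared-down Item for illustration; see Item Types for the real union.
interface Item {
  id: string;
  type: string;
  status: string;
}

class SimpleItemLog {
  private readonly _items: Item[] = [];

  get items(): ReadonlyArray<Item> {
    return this._items;
  }

  append(item: Item): void {
    // Append-only: there is no API for removing or rewriting history.
    this._items.push(item);
  }
}

const log = new SimpleItemLog();
log.append({ id: '1', type: 'message', status: 'completed' });
log.append({ id: '2', type: 'function_call', status: 'in_progress' });
// log.items.length === 2
```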

Channel Methods

Context exposes three methods for communicating over Channels:

  • recv -- recv<T>(channel: Channel<T>, opts?: { timeout?: number }): Promise<T>. Wait for the next value. Throws channel_timeout if the timeout expires.
  • send -- send<T>(channel: Channel<T>, value: T): void. Push a value into a channel.
  • tryRecv -- tryRecv<T>(channel: Channel<T>): T | null. Non-blocking read. Returns null if nothing is available.

import { channel } from '@noetic/core';
import type { Context } from '@noetic/core';
import { z } from 'zod';

const approvals = channel('approvals', {
  schema: z.boolean(),
  mode: 'queue',
});

declare const ctx: Context;
ctx.send(approvals, true);
const approved = await ctx.recv(approvals, { timeout: 5_000 });
const maybe = ctx.tryRecv(approvals);

Lifecycle Controls

Checkpoint

declare const ctx: Context;
await ctx.checkpoint();

Persists the current execution state so it can be restored later. In the AgentHarness this is a no-op; durable agent harnesses use it for crash recovery.

Complete

declare const ctx: Context;
declare const finalValue: unknown;
ctx.complete(finalValue);

Signals that the execution has a result and should stop. After calling complete:

  • ctx.completed -- true
  • ctx.completionValue -- The value you passed

Abort

declare const ctx: Context;
ctx.abort('user cancelled');

Signals that the execution should be cancelled. After calling abort:

  • ctx.aborted -- true
  • ctx.abortReason -- The reason string you passed

Loops check ctx.aborted at the top of every iteration and throw a cancelled error if set.
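
That check can be sketched as follows (a local mock stands in for Context, and the error shape is illustrative, not the runtime's exact error type):

```typescript
interface AbortableCtx {
  aborted: boolean;
  abortReason: string | null;
}

function runSteps(ctx: AbortableCtx, steps: Array<() => void>): void {
  for (const step of steps) {
    // Mirror the runtime: check for cancellation at the top of every iteration.
    if (ctx.aborted) {
      throw new Error(`cancelled: ${ctx.abortReason}`);
    }
    step();
  }
}

const ctx: AbortableCtx = { aborted: false, abortReason: null };
let ran = 0;
let err: string | null = null;
try {
  runSteps(ctx, [
    () => { ran += 1; },
    () => { ran += 1; ctx.aborted = true; ctx.abortReason = 'user cancelled'; },
    () => { ran += 1; }, // never reached: the loop sees the abort first
  ]);
} catch (e) {
  err = (e as Error).message;
}
// ran === 2, err === 'cancelled: user cancelled'
```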

Item Types

Every entry in the item log is one of these discriminated union variants. All items share a base shape:

interface ItemBase {
  readonly id: string;
  readonly status: 'in_progress' | 'completed' | 'incomplete' | 'failed';
}

MessageItem

interface MessageItem extends ItemBase {
  readonly type: 'message';
  readonly role: 'user' | 'assistant' | 'system' | 'developer';
  readonly content: ContentPart[];
}

FunctionCallItem

interface FunctionCallItem extends ItemBase {
  readonly type: 'function_call';
  readonly callId: string;
  readonly name: string;
  readonly arguments: string;
}

FunctionCallOutputItem

interface FunctionCallOutputItem extends ItemBase {
  readonly type: 'function_call_output';
  readonly callId: string;
  readonly output: string;
}
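
The two item kinds are linked by callId: the output item echoes the call's callId so consumers can pair a tool invocation with its result. A sketch with local type copies and illustrative values:

```typescript
// Simplified copies of the interfaces above, for illustration.
interface FunctionCallItem {
  readonly id: string;
  readonly status: string;
  readonly type: 'function_call';
  readonly callId: string;
  readonly name: string;
  readonly arguments: string;
}

interface FunctionCallOutputItem {
  readonly id: string;
  readonly status: string;
  readonly type: 'function_call_output';
  readonly callId: string;
  readonly output: string;
}

const call: FunctionCallItem = {
  id: 'item_1',
  status: 'completed',
  type: 'function_call',
  callId: 'call_abc',
  name: 'get_weather',
  arguments: JSON.stringify({ city: 'Paris' }), // arguments are a JSON string
};

const result: FunctionCallOutputItem = {
  id: 'item_2',
  status: 'completed',
  type: 'function_call_output',
  callId: call.callId, // the shared callId links the output to its call
  output: JSON.stringify({ tempC: 18 }),
};

const matched = result.callId === call.callId; // true
```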

ReasoningItem

interface ReasoningItem extends ItemBase {
  readonly type: 'reasoning';
  readonly content: ContentPart[];
  readonly summary?: ContentPart[];
  readonly encryptedContent?: string;
}

ExtensionItem

Custom item types using the prefix:name convention (e.g., noetic:analytics, openrouter:web_search):

interface ExtensionItem extends ItemBase {
  readonly type: `${string}:${string}`;
  readonly data: Record<string, unknown>;
}
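
For example, an item of the noetic:analytics type mentioned above might look like this (the payload is illustrative, and the interface is a local copy):

```typescript
interface ExtensionItem {
  readonly id: string;
  readonly status: 'in_progress' | 'completed' | 'incomplete' | 'failed';
  readonly type: `${string}:${string}`;
  readonly data: Record<string, unknown>;
}

const item: ExtensionItem = {
  id: 'ext_1',
  status: 'completed',
  // The prefix:name convention keeps custom types namespaced per provider.
  type: 'noetic:analytics',
  data: { event: 'run_started', durationMs: 42 },
};
```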

For the complete reference of all item shapes, streaming events, and how they map to the OpenResponses specification, see Items & Events.

ContentPart

Content arrays use these discriminated variants:

  • output_text (text: string) -- Model-generated text.
  • input_text (text: string) -- User-provided text.
  • input_image (imageUrl: string, detail?: 'auto' | 'low' | 'high') -- User-provided image.
  • input_file (fileData?: string, fileId?: string | null, fileUrl?: string, filename?: string) -- User-provided file.
  • refusal (refusal: string) -- Model refusal message.
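
A content array mixes these variants, and consumers narrow on the type discriminant to reach variant-specific fields. A simplified sketch (local type copies, illustrative values):

```typescript
type ContentPart =
  | { type: 'output_text'; text: string }
  | { type: 'input_text'; text: string }
  | { type: 'input_image'; imageUrl: string; detail?: 'auto' | 'low' | 'high' }
  | { type: 'refusal'; refusal: string };

const content: ContentPart[] = [
  { type: 'input_text', text: 'What is in this picture?' },
  { type: 'input_image', imageUrl: 'https://example.com/cat.png', detail: 'low' },
];

// Narrow on the discriminant to read variant-specific fields.
const texts = content
  .filter((p): p is Extract<ContentPart, { type: 'input_text' }> => p.type === 'input_text')
  .map((p) => p.text);
// texts === ['What is in this picture?']
```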

Per-Layer Usage Breakdown

ctx.lastLayerUsage exposes how the most recent callModel decomposed the context window across its contributors. The runtime captures this snapshot after every successful LLM step and overwrites it on the next call.

declare const ctx: Context;
const usage = ctx.lastLayerUsage;
if (usage) {
  for (const layer of usage.layers) {
    console.log(`${layer.layerId}: ${layer.tokenCount} tokens`);
  }
  console.log(`system: ${usage.systemPromptTokens}, history: ${usage.historyTokens}`);
}

Each entry's tokenCount is self-reported by the layer's recall() output; systemPromptTokens, toolsTokens, and historyTokens are estimates derived from the rendered request. The sum is totalUsedTokens. The same snapshot is also surfaced on HarnessResponse.lastLayerUsage so external callers (CLIs, dashboards) can read it after the run completes without holding a Context reference.
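
The arithmetic described above can be sketched as follows (simplified local types and invented numbers; the real LastLayerUsage and LayerUsageEntry interfaces live in Context Types):

```typescript
interface LayerUsageEntry {
  layerId: string;
  tokenCount: number;
}

interface LastLayerUsage {
  layers: LayerUsageEntry[];
  systemPromptTokens: number;
  toolsTokens: number;
  historyTokens: number;
}

// totalUsedTokens is the sum of per-layer counts plus the request-derived estimates.
function totalUsedTokens(u: LastLayerUsage): number {
  const layerSum = u.layers.reduce((acc, l) => acc + l.tokenCount, 0);
  return layerSum + u.systemPromptTokens + u.toolsTokens + u.historyTokens;
}

const usage: LastLayerUsage = {
  layers: [{ layerId: 'working-memory', tokenCount: 120 }],
  systemPromptTokens: 300,
  toolsTokens: 80,
  historyTokens: 500,
};
// totalUsedTokens(usage) === 1000
```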

See Context Types for the LastLayerUsage and LayerUsageEntry interfaces.
