Tool Memory
How tools expose imperative state and function-call interfaces to memory layers and LLMs.
Overview
Noetic provides two patterns for connecting tools and memory layers. Both inject state into the LLM's context via recall, but they differ in who writes the state:
| | Tool-Owned Memory | Function-Call Memory |
|---|---|---|
| Who writes state | Tool code (imperative) | The LLM (via function calls) |
| Layer creation | Auto-generated by toolMemoryLayer() | Hand-written MemoryLayer |
| State access | toolCtx.memory.get() / .set() | findFunctionCall() in store() hook |
| Scope | 'execution' by default | Any scope you choose |
| Best for | Tracking tool side effects | Scratchpads, self-managed memory |
Tool-Owned Memory
Tools can declare a memory property. The runtime collects these declarations and auto-generates a MemoryLayer for each unique memory.id. Tools then read and write state imperatively during execution.
ToolMemoryDeclaration
import type { ToolMemoryDeclaration } from '@noetic/core';
interface ToolMemoryDeclaration<TState = unknown> {
/** Shared id -- tools with the same id share state. Defaults to tool.name. */
id?: string;
/** Factory for the initial state. */
init: () => TState;
/** Project state into the LLM context. Return null to omit. */
recall: (state: TState) => string | null;
}
The recall function returns string | null -- a shorthand the runtime accepts in place of a full RecallResult. Returning a string wraps it as a developer message item automatically.
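As a minimal illustration of that shorthand, here is a hypothetical call-counter declaration. It is typed against a local copy of the documented interface so the snippet stands alone; the names counterMemory and CounterState are invented for this example:

```typescript
// Local copy of the documented interface so this sketch stands alone.
interface ToolMemoryDeclaration<TState = unknown> {
  id?: string;
  init: () => TState;
  recall: (state: TState) => string | null;
}

interface CounterState {
  calls: number;
}

// Hypothetical call-counter: recall returns null (omitted from context)
// until there is something worth showing.
const counterMemory: ToolMemoryDeclaration<CounterState> = {
  id: 'call-counter',
  init: () => ({ calls: 0 }),
  recall: (state) =>
    state.calls === 0 ? null : `<call_count>${state.calls}</call_count>`,
};

console.log(counterMemory.recall(counterMemory.init())); // null
console.log(counterMemory.recall({ calls: 3 }));         // "<call_count>3</call_count>"
```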
Registering Tool Memory
Pass your tools to toolMemoryLayer() to generate the layers, then include both in your agent:
import { react, toolMemoryLayer } from '@noetic/core';
const tools = [writeTodos, updateTodo, listTodos];
const agent = react({
model: 'openrouter/anthropic/claude-sonnet-4',
instructions: 'You are a task planner.',
tools,
memory: [
...toolMemoryLayer(tools),
],
});
toolMemoryLayer() creates one layer per unique memory.id. Tools that share the same id share the same state.
Reading and Writing State
Inside a tool's execute function, use toolCtx.memory to access layer state:
interface ToolMemory {
get<T>(layerId: string): T | undefined;
set<T>(layerId: string, state: T): void;
}
get(id) returns the current state for the layer with that id, or undefined if uninitialized. set(id, state) replaces the state; the next recall cycle picks up the new value.
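The read-modify-write pattern looks like this in isolation. The Map-backed memoryStub below is purely illustrative -- at runtime the framework supplies toolCtx.memory -- and recordCall stands in for the body of a tool's execute function:

```typescript
// Map-backed stand-in for the documented ToolMemory interface -- purely
// illustrative; at runtime the framework supplies toolCtx.memory.
interface ToolMemory {
  get<T>(layerId: string): T | undefined;
  set<T>(layerId: string, state: T): void;
}

function memoryStub(): ToolMemory {
  const store = new Map<string, unknown>();
  return {
    get: <T>(layerId: string) => store.get(layerId) as T | undefined,
    set: <T>(layerId: string, state: T): void => { store.set(layerId, state); },
  };
}

interface CountState {
  calls: number;
}

// Typical read-modify-write inside a tool's execute function: read current
// state (falling back to a default), compute the next value, write it back.
function recordCall(memory: ToolMemory): number {
  const state = memory.get<CountState>('call-counter') ?? { calls: 0 };
  const next = { calls: state.calls + 1 };
  memory.set('call-counter', next); // the next recall cycle sees this value
  return next.calls;
}

const memory = memoryStub();
recordCall(memory);
console.log(recordCall(memory)); // 2
```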
Full Example: Todo Tools
Multiple tools share a single memory layer by declaring the same memory.id. Each tool reads and writes through toolCtx.memory:
import { z } from 'zod';
import { tool } from '@noetic/core';
import type { ToolMemoryDeclaration } from '@noetic/core';
//#region Types
interface TodoItem {
id: string;
description: string;
status: 'pending' | 'in_progress' | 'completed';
}
interface TodoState {
items: TodoItem[];
}
//#endregion
//#region Memory Declaration
const TODO_ID = 'todos';
const todoMemory: ToolMemoryDeclaration<TodoState> = {
id: TODO_ID,
init: () => ({ items: [] }),
recall: (state) => {
if (!state.items.length) {
return null;
}
const lines = state.items.map(
(item) => `[${item.status}] ${item.id}: ${item.description}`,
);
return `<todos>\n${lines.join('\n')}\n</todos>`;
},
};
//#endregion
//#region Tools
const writeTodos = tool({
name: 'write_todos',
description: 'Create new todo items.',
input: z.object({
items: z.array(z.string()),
}),
output: z.array(z.object({
id: z.string(),
description: z.string(),
status: z.string(),
})),
memory: todoMemory,
execute: async (args, toolCtx) => {
const state = toolCtx.memory.get<TodoState>(TODO_ID) ?? { items: [] };
const newItems: TodoItem[] = args.items.map((desc) => ({
id: crypto.randomUUID().slice(0, 8),
description: desc,
status: 'pending' as const,
}));
toolCtx.memory.set(TODO_ID, {
items: [...state.items, ...newItems],
});
return newItems;
},
});
const updateTodo = tool({
name: 'update_todo',
description: 'Update the status of a todo item.',
input: z.object({
id: z.string(),
status: z.enum(['pending', 'in_progress', 'completed']),
}),
output: z.object({
id: z.string(),
description: z.string(),
status: z.string(),
}),
memory: todoMemory,
execute: async (args, toolCtx) => {
const state = toolCtx.memory.get<TodoState>(TODO_ID) ?? { items: [] };
const item = state.items.find((i) => i.id === args.id);
if (!item) {
throw new Error(`Todo not found: ${args.id}`);
}
const updated = { ...item, status: args.status };
toolCtx.memory.set(TODO_ID, {
items: state.items.map((i) => (i.id === args.id ? updated : i)),
});
return updated;
},
});
//#endregion
Both writeTodos and updateTodo declare memory: todoMemory with the same id. The runtime generates a single shared layer. When either tool calls toolCtx.memory.set(TODO_ID, ...), the updated state is projected into the LLM's context on the next turn via the recall function.
Scope and Lifetime
toolMemoryLayer() defaults to 'execution' scope -- state lives only for the current agent run and is discarded afterward. If you need persistence across runs, write a custom memory layer with 'thread' or 'resource' scope instead.
Function-Call Memory
In this pattern, the LLM updates memory state by emitting a function call (like calling a tool), and the memory layer's store() hook intercepts it. No formal tool schema is registered -- the layer itself acts as a pseudo-tool.
The built-in workingMemory() layer uses this pattern: the LLM calls updateWorkingMemory({...}), and the store hook parses the arguments and merges them into state.
How It Works
- Your system prompt instructs the LLM to call a specific function name to update state
- The LLM emits a function_call item with that name
- Your store() hook uses findFunctionCall() to extract the arguments
- State is updated from the parsed arguments
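The extraction step can be sketched end to end. The item shape and the findCall helper below are illustrative stand-ins -- the real findFunctionCall() and context-item types in @noetic/core may differ in detail:

```typescript
// Assumed context-item shape -- the real item types in @noetic/core may differ.
interface Item {
  type: string;
  name?: string;
  arguments?: string; // JSON-encoded function-call arguments
}

// Simplified sketch of the documented behavior: return the parsed arguments of
// the first matching function_call, or null when absent or malformed.
function findCall(items: Item[], name: string): Record<string, unknown> | null {
  for (const item of items) {
    if (item.type === 'function_call' && item.name === name && item.arguments !== undefined) {
      try {
        return JSON.parse(item.arguments);
      } catch {
        return null; // malformed JSON: treat as no usable call
      }
    }
  }
  return null;
}

const items: Item[] = [
  { type: 'message' },
  { type: 'function_call', name: 'updateEntities', arguments: '{"entities":[]}' },
];
console.log(findCall(items, 'updateEntities')); // { entities: [] }
console.log(findCall(items, 'missing'));        // null
```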
findFunctionCall Utility
import { findFunctionCall } from '@noetic/core';
// Searches newItems for the first function_call matching the name.
// Returns parsed JSON arguments as Record<string, unknown>, or null.
const args = findFunctionCall(newItems, 'updateEntityMemory');
Example: Entity-Extraction Layer
A custom layer that lets the LLM store discovered entities by calling updateEntities:
import type { MemoryLayer } from '@noetic/core';
import { Slot, findFunctionCall } from '@noetic/core';
import { createMessage, estimateTokens } from '@noetic/core';
interface Entity {
name: string;
type: string;
notes: string;
}
interface EntityState {
entities: Entity[];
}
function entityMemory(): MemoryLayer<EntityState> {
return {
id: 'entity-memory',
name: 'Entity Memory',
slot: Slot.ENTITY,
scope: 'thread',
budget: { min: 200, max: 1000 },
hooks: {
async init({ storage }) {
const saved = await storage.get<EntityState>('state');
return {
state: saved ?? { entities: [] },
};
},
async recall({ state }) {
if (!state.entities.length) {
return null;
}
const text = state.entities
.map((e) => `- ${e.name} (${e.type}): ${e.notes}`)
.join('\n');
const content = `<known_entities>\n${text}\n</known_entities>`;
return {
items: [createMessage(content, 'developer')],
tokenCount: estimateTokens(content),
};
},
async store({ newItems, state, storage }) {
const args = findFunctionCall(newItems, 'updateEntities');
if (!args) {
return;
}
const incoming = (args.entities ?? []) as Entity[];
const updated: EntityState = {
entities: [...state.entities, ...incoming],
};
await storage.set('state', updated);
return { state: updated };
},
},
};
}
Register the layer and instruct the LLM in the system prompt:
const agent = react({
model: 'openrouter/anthropic/claude-sonnet-4',
instructions: `You are a research assistant.
When you discover important entities (people, organizations, concepts),
call updateEntities to remember them:
updateEntities({ entities: [{ name, type, notes }] })`,
memory: [entityMemory()],
});
Because no tool schema is registered for updateEntities, the LLM relies entirely on the system prompt instructions. This is the key trade-off: function-call memory is simpler to set up but depends on clear prompting.
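A consequence of that trade-off: the arguments arrive as free-form JSON rather than schema-validated tool input, so a store() hook may want to validate them before merging. A sketch with a hand-rolled guard -- the Entity shape matches the example above, while isEntity and coerceEntities are hypothetical helpers:

```typescript
interface Entity {
  name: string;
  type: string;
  notes: string;
}

// Hypothetical guard: nothing enforces the Entity shape on the LLM's side,
// so check each field before trusting it.
function isEntity(x: unknown): x is Entity {
  return (
    typeof x === 'object' && x !== null &&
    typeof (x as Entity).name === 'string' &&
    typeof (x as Entity).type === 'string' &&
    typeof (x as Entity).notes === 'string'
  );
}

// Hypothetical helper a store() hook could call on the parsed arguments:
// drop malformed entries instead of crashing the run.
function coerceEntities(args: Record<string, unknown> | null): Entity[] {
  if (!args || !Array.isArray(args.entities)) {
    return [];
  }
  return args.entities.filter(isEntity);
}

const kept = coerceEntities({
  entities: [
    { name: 'Ada Lovelace', type: 'person', notes: 'early computing pioneer' },
    { name: 42 }, // malformed -- dropped by the guard
  ],
});
console.log(kept.length); // 1
```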
Combining Patterns
A single agent can use both patterns together. Tool-owned memory tracks state that tools modify imperatively, while function-call memory gives the LLM a channel to update state directly:
const tools = [writeTodos, updateTodo];
const agent = react({
model: 'openrouter/anthropic/claude-sonnet-4',
instructions: 'You are a planner. Use todos to track tasks. Call updateNotes to save observations.',
tools,
memory: [
...toolMemoryLayer(tools), // imperative: tools write todo state
notesMemory(), // function-call: LLM writes notes
],
});
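The notesMemory() layer above is referenced but not defined in this doc. A minimal sketch of how its hooks might implement the function-call pattern, using local stand-ins for the framework types so the snippet runs on its own -- a real layer would use MemoryLayer, findFunctionCall, createMessage, estimateTokens, and storage as in the entity example:

```typescript
// Stand-ins for the framework types so this sketch is self-contained -- the
// real MemoryLayer in @noetic/core carries more fields (slot, budget, ...).
interface Item {
  type: string;
  name?: string;
  arguments?: string; // JSON-encoded function-call arguments
}
interface NotesState {
  notes: string[];
}

// One way notesMemory's hooks might look: the LLM calls
// updateNotes({ notes: [...] }) and store() appends the strings it finds.
const notesHooks = {
  init: (): NotesState => ({ notes: [] }),

  // Project saved notes into the context, or omit the block when empty.
  recall: (state: NotesState): string | null =>
    state.notes.length ? `<notes>\n${state.notes.join('\n')}\n</notes>` : null,

  // Look for an updateNotes function call among the new items and merge its notes.
  store: ({ newItems, state }: { newItems: Item[]; state: NotesState }): NotesState => {
    const call = newItems.find(
      (i) => i.type === 'function_call' && i.name === 'updateNotes',
    );
    let incoming: string[] = [];
    if (call?.arguments) {
      try {
        const parsed = JSON.parse(call.arguments);
        if (Array.isArray(parsed.notes)) {
          incoming = parsed.notes.filter((n: unknown): n is string => typeof n === 'string');
        }
      } catch {
        // Malformed arguments: ignore rather than crash the run.
      }
    }
    return { notes: [...state.notes, ...incoming] };
  },
};

const s1 = notesHooks.store({
  newItems: [
    { type: 'function_call', name: 'updateNotes', arguments: '{"notes":["API is rate-limited"]}' },
  ],
  state: notesHooks.init(),
});
console.log(notesHooks.recall(s1)); // "<notes>\nAPI is rate-limited\n</notes>"
```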