// constrain the agent, not the intelligence

NOETIC

Composable primitives. Clean code from 10 lines to 10,000.

Start with pre-built patterns — ReAct, task trees, dual-agent loops. Or compose your own. Reactive memory keeps context windows manageable automatically.

$ bun add @noetic/core
Build your first agent → · GitHub ★
react-agent.ts
import { AgentHarness, react } from '@noetic/core';

const agent = react({
  model: 'gpt-4o',
  tools: [searchTool, calcTool],
  maxSteps: 10,
});

const harness = new AgentHarness({
  name: 'researcher',
  initialStep: agent,
  params: {},
});

await harness.execute('Find recent AI news');
const { text } = await harness.getAgentResponse();
// core primitives

Meet the building blocks

A small set of composable primitives. Build any agent pattern by combining the pieces you need.

Reasoning loops, parallel workloads, sub-agents — all of it falls out of these seven. The ReAct pattern is 15 lines. A task tree is 40. You can read both in under a minute.

[Diagram] A loop built from the primitives: prompt, llm, tool, run, until (legend: steps, operators)
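As a feel for how operator-style primitives compose, here is a standalone sketch of stop-condition combinators and a loop runner in plain TypeScript. This is not Noetic's source; `StopCheck`, `runLoop`, and the state shape are invented for illustration, loosely mirroring the `until.noToolCalls()` / `until.maxSteps()` / `any()` names used elsewhere on this page.

```typescript
// Illustrative combinators, not the library's actual implementation.
type LoopState = { stepCount: number; toolCalls: string[] };
type StopCheck = (state: LoopState) => boolean;

// Stop when the last step made no tool calls (the model is done acting).
const noToolCalls = (): StopCheck => (state) => state.toolCalls.length === 0;

// Stop after a fixed number of steps, as a safety rail.
const maxSteps = (n: number): StopCheck => (state) => state.stepCount >= n;

// Fire when at least one of the given conditions fires.
const any = (...checks: StopCheck[]): StopCheck => (state) =>
  checks.some((check) => check(state));

// Run a step repeatedly until the stop condition holds; returns step count.
function runLoop(step: (state: LoopState) => string[], stop: StopCheck): number {
  const state: LoopState = { stepCount: 0, toolCalls: [] };
  do {
    state.toolCalls = step(state); // each step reports the tools it called
    state.stepCount += 1;
  } while (!stop(state));
  return state.stepCount;
}
```

The combinators are just predicates over loop state, which is why `any(until.noToolCalls(), until.maxSteps(10))`-style expressions read the way they do.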
// context management

Unparalleled memory management

Long multi-turn conversations without blowing up the context window.

Working memory, observation extraction, vector recall, episode summaries, durable checkpoints. Let Noetic handle it or build your own. Token costs stay predictable as conversations grow.

[Diagram] Memory layers (working memory, observational, semantic recall, episodic, durable state) feed assembleView(): raw history ≈ 6,000 tok → assembled context ≈ 680 tok (legend: working layers, retrieval layers, persistence)
Working Memory
Scratchpad for current turn
Observational Memory
Auto-extracted facts from conversation
Semantic Recall
Vector-indexed long-term storage
Episodic Memory
Past conversation summaries
Durable Task State
Persistent agent checkpoints
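To make the layered design above concrete, here is a minimal standalone sketch of the idea: required layers go in verbatim, optional recall and summary layers are admitted until a token budget is hit. This is illustrative plain TypeScript, not Noetic's implementation; the `MemoryLayers` shape and the ~4-characters-per-token estimate are simplifying assumptions.

```typescript
// Illustrative sketch of budget-bounded context assembly.
// Not Noetic's actual source; names here are hypothetical.
interface MemoryLayers {
  workingMemory: string[];    // scratchpad for the current turn (always kept)
  observations: string[];     // auto-extracted facts (always kept)
  recalled: string[];         // vector-recall hits, most relevant first
  episodeSummaries: string[]; // past conversation summaries
}

// Crude token estimate: roughly 4 characters per token.
const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

// Keep required layers, then add optional pieces until the budget is spent.
function assembleView(layers: MemoryLayers, tokenBudget: number): string {
  const parts = [...layers.workingMemory, ...layers.observations];
  const optional = [...layers.recalled, ...layers.episodeSummaries];
  let used = estimateTokens(parts.join("\n"));
  for (const piece of optional) {
    const cost = estimateTokens(piece);
    if (used + cost > tokenBudget) break; // drop what no longer fits
    parts.push(piece);
    used += cost;
  }
  return parts.join("\n");
}
```

The point of the sketch: the current turn and extracted facts are non-negotiable, while retrieval layers degrade gracefully under pressure, which is what keeps token costs predictable as conversations grow.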
// ready to use

Batteries included

Common agent patterns, built in for convenience.

Each pattern is a composition of the primitives above — no special cases, no hidden behavior. Read the source. Fork it. The framework doesn't care.

[Diagram] ReAct pattern: input → thought → action → observe → answer
// read the source

Reasoning loop in 15 lines, full memory stack in 10. No boilerplate.

It's the same seven primitives from before. Once you know those, you can read — and change — anything.

react-loop.ts
import { any, loop, step, until } from '@noetic/core';

const reasonAndAct = loop({
  id: 'react-loop',
  steps: [
    step.llm({
      id: 'think',
      model: 'gpt-4o',
      tools: [searchTool, calcTool],
    }),
  ],
  until: any(until.noToolCalls(), until.maxSteps(10)),
});
// Observe → Think → Act — just primitives composed
// the landscape

What makes Noetic different?

LangChain
Magic on the way in. Black box on the way out.
LangGraph
Powerful. Also: now you're a graph theorist.
CrewAI
Works great until it doesn't.
AI SDK
One magical primitive; too opaque to build on with confidence.
Noetic
Seven primitives. Read it, extend it, ship it — it's just TypeScript.

OpenAI, Anthropic, local models, or a custom adapter. Bring your own provider.
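A custom adapter usually reduces to a single small interface: a name plus a completion function over messages. The sketch below is hypothetical (`ModelAdapter` and `echoAdapter` are not part of any published API) and shows the shape such an adapter could take, with a stub implementation useful for local tests.

```typescript
// Hypothetical adapter shape; Noetic's real interface may differ.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface ModelAdapter {
  name: string;
  complete(messages: ChatMessage[]): Promise<string>;
}

// A stub adapter for tests and local development:
// it echoes the most recent user message instead of calling a model.
const echoAdapter: ModelAdapter = {
  name: "echo",
  async complete(messages) {
    const lastUser = [...messages].reverse().find((m) => m.role === "user");
    return lastUser ? `echo: ${lastUser.content}` : "";
  },
};
```

Swapping providers then means swapping one object that satisfies the interface, whether it wraps OpenAI, Anthropic, or a local model server.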

// what's next

Eval Framework + RL Pipeline

Write evals as easily as Jest tests. Train agents with reinforcement learning.

Define what "good" looks like for your agent, run it against a dataset, and let the optimizer improve it. Same primitives. Same runtime. Just a feedback loop added.

eval-framework
Coming Soon
  EVAL RUN: agent-quality-v3
  ─────────────────────────────────

  ✓ PASS  responds to greeting          12ms
  ✓ PASS  uses search tool correctly    340ms
  ✗ FAIL  handles ambiguous query       280ms
  ✓ PASS  stays within token budget     890ms
  ✓ PASS  cites sources accurately      450ms

  Results: 4/5 passed (80%)

  RL PIPELINE ━━━━━━━━━━━━━━━ READY
  reward signal: accuracy + cost
  policy update: pending
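Since the eval framework is unreleased, any API is speculative. Purely as a sketch of the "evals as easily as Jest tests" idea, here is what defining and running cases could look like; `EvalCase`, `EvalReport`, and `runEvals` are invented names, not a preview of the real interface.

```typescript
// Speculative sketch of a Jest-flavored eval runner. All names invented.
interface EvalCase {
  name: string;
  run: () => Promise<boolean> | boolean; // true = pass
}

interface EvalReport {
  passed: number;
  total: number;
  failures: string[]; // names of failing cases
}

// Run every case and tally results, like a tiny test runner.
async function runEvals(cases: EvalCase[]): Promise<EvalReport> {
  const failures: string[] = [];
  for (const c of cases) {
    const ok = await c.run();
    if (!ok) failures.push(c.name);
  }
  return {
    passed: cases.length - failures.length,
    total: cases.length,
    failures,
  };
}
```

An optimizer loop would then treat the report's pass rate (plus cost) as a reward signal, which is the "just a feedback loop added" part.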