Steps

step.llm

Call a language model with tools, structured output, and system prompts.

Quick Example

import { step } from '@noetic/core';

const chat = step.llm({
  id: 'chat',
  model: 'gpt-4o',
  system: 'You are a helpful assistant.',
});

What It Does

step.llm calls a language model. You provide the model name and optionally a system prompt, tools, structured output schema, and generation parameters. The interpreter handles message formatting, tool execution loops, and response parsing.

API Reference

Options

| Property | Type         | Required | Description                                                  |
| -------- | ------------ | -------- | ------------------------------------------------------------ |
| id       | string       | Yes      | Unique step identifier                                       |
| model    | string       | Yes      | Model identifier (e.g. 'gpt-4o', 'claude-sonnet-4-20250514') |
| system   | string       | No       | System prompt prepended to the conversation                  |
| tools    | Tool[]       | No       | Tools the model may call                                     |
| output   | ZodType<O>   | No       | Zod schema for structured output                             |
| params   | ModelParams  | No       | Generation parameters                                        |
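Put together, the table above corresponds roughly to the following TypeScript shape. This is a sketch for orientation only, not the library's actual declarations; `Tool`, `ModelParams`, and `ZodType` are stubbed minimally here so the snippet is self-contained.

```typescript
// Minimal stand-ins so the sketch is self-contained; the real types
// come from @noetic/core and zod.
type ZodType<O> = { parse(input: unknown): O };
type Tool = { name: string; description: string };
type ModelParams = {
  temperature?: number;
  topP?: number;
  maxTokens?: number;
  stopSequences?: string[];
};

// Hypothetical options shape mirroring the table above.
interface LlmStepOptions<O = unknown> {
  id: string;          // required: unique step identifier
  model: string;       // required: e.g. 'gpt-4o'
  system?: string;     // optional system prompt
  tools?: Tool[];      // optional tools the model may call
  output?: ZodType<O>; // optional structured-output schema
  params?: ModelParams; // optional generation parameters
}

const options: LlmStepOptions = {
  id: 'chat',
  model: 'gpt-4o',
  system: 'You are a helpful assistant.',
};
```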

ModelParams

| Property      | Type     | Description                 |
| ------------- | -------- | --------------------------- |
| temperature   | number   | Sampling temperature        |
| topP          | number   | Nucleus sampling threshold  |
| maxTokens     | number   | Maximum tokens to generate  |
| stopSequences | string[] | Stop sequences              |
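Accepted ranges for these parameters are provider-specific. As a rough illustration, here is a pre-flight check using commonly seen bounds (0–2 for temperature, 0–1 for topP); the bounds are assumptions about typical providers, not part of the Noetic API.

```typescript
type ModelParams = {
  temperature?: number;
  topP?: number;
  maxTokens?: number;
  stopSequences?: string[];
};

// Illustrative sanity check; real bounds vary by provider.
function checkParams(p: ModelParams): string[] {
  const problems: string[] = [];
  if (p.temperature !== undefined && (p.temperature < 0 || p.temperature > 2)) {
    problems.push('temperature outside [0, 2]');
  }
  if (p.topP !== undefined && (p.topP < 0 || p.topP > 1)) {
    problems.push('topP outside [0, 1]');
  }
  if (p.maxTokens !== undefined && p.maxTokens <= 0) {
    problems.push('maxTokens must be positive');
  }
  return problems;
}
```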

Tool Interface

Tools passed to step.llm follow the standard Noetic Tool interface. See step.tool for the full definition.

Structured Output

Provide a Zod schema via the output option and the model's response will be parsed and validated against it at runtime.

import { z } from 'zod';
import { step } from '@noetic/core';

const extract = step.llm({
  id: 'extract-entities',
  model: 'gpt-4o',
  system: 'Extract named entities from the text.',
  output: z.object({
    people: z.array(z.string()),
    places: z.array(z.string()),
    organizations: z.array(z.string()),
  }),
});

If the model response does not match the schema, a validation error is thrown.

With Tools

import { z } from 'zod';
import { step } from '@noetic/core';

const searchTool = {
  name: 'search',
  description: 'Search the knowledge base',
  input: z.object({
    query: z.string(),
  }),
  output: z.object({
    results: z.array(z.string()),
  }),
  execute: async (args) => {
    // Stub implementation; a real tool would search using args.query.
    return {
      results: ['Result 1', 'Result 2'],
    };
  },
};

const researcher = step.llm({
  id: 'research',
  model: 'gpt-4o',
  system: 'Answer the question using the search tool.',
  tools: [searchTool],
});

When the model emits a tool call, the runtime executes the tool and feeds the result back. To loop until the model stops calling tools, wrap the LLM step in a loop with until.noToolCalls().
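The runtime's tool loop can be pictured with a small self-contained simulation. This is plain TypeScript with no Noetic imports; the message and tool-call shapes are illustrative, not the library's actual wire format.

```typescript
type ToolCall = { name: string; args: { query: string } };
type ModelTurn = { toolCalls: ToolCall[]; text?: string };

// Fake model: asks for one search, then answers using the result.
function fakeModel(history: string[]): ModelTurn {
  if (!history.some((m) => m.startsWith('tool:'))) {
    return { toolCalls: [{ name: 'search', args: { query: 'noetic' } }] };
  }
  return { toolCalls: [], text: 'Answer based on search results.' };
}

async function runToolLoop(): Promise<string> {
  const history: string[] = ['user: What is Noetic?'];
  // Loop until the model stops emitting tool calls --
  // the condition until.noToolCalls() expresses declaratively.
  for (;;) {
    const turn = fakeModel(history);
    if (turn.toolCalls.length === 0) return turn.text ?? '';
    for (const call of turn.toolCalls) {
      // Execute the tool and feed the result back as a message.
      const result = { results: ['Result 1', 'Result 2'] };
      history.push(`tool:${call.name} -> ${JSON.stringify(result)}`);
    }
  }
}
```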

Tuning Generation

import { step } from '@noetic/core';

const creative = step.llm({
  id: 'brainstorm',
  model: 'gpt-4o',
  system: 'Generate creative ideas.',
  params: {
    temperature: 0.9,
    maxTokens: 2000,
  },
});
