Functions

assistant-message

fn (text: Str?, tool-calls: Vec<::ai::tool/ToolCall>?): Message

Construct a Message representing an assistant turn. When tool-calls is non-null, the message records a tool-use turn that must be paired with one or more tool-result-message entries on the next user turn.
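
Example

A minimal sketch of the two shapes (the ToolCall construction mirrors the Message example further down):

// plain text turn
::ai::chat/assistant-message("Hello!", null)

// tool-use turn; must be answered by tool-result-message entries on the next user turn
::ai::chat/assistant-message("", [
    ::ai::tool/ToolCall({id: "tu_1", name: "get-weather", input: {city: "Paris"}})
])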

check-budget

fn (opts: ChatOptions, messages: Vec, tools: Vec<::ai::tool/Tool>?): Null

Internal: enforce opts.max-context-tokens / opts.warn-context-pct against messages + system + tools.

  • If neither limit is set, returns silently.
  • If count-tokens-fn is set, uses it for messages; otherwise falls back to count-tokens-heuristic.
  • Logs a warning via tap when the projected total crosses the warn-context-pct threshold of the limit.
  • Fails (fail) when the projected total exceeds the limit outright. The failure carries a structured breakdown.
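
Example

check-budget is internal; the ChatOptions fields below are what drive it (the values are illustrative):

opts ::ai::chat/ChatOptions({
    chat-fn: ::anthropic::messages/chat-with-tools,
    model: "claude-sonnet-4-5",
    max-context-tokens: 180000,                          // fail when the projected total exceeds this
    warn-context-pct: 0.8,                               // warn via tap at 80% of the limit
    count-tokens-fn: ::ai::chat/count-tokens-heuristic   // same fallback applies when unset
})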

count-message-chars

fn (m: Message): Int

Internal: serialize a single Message to a char-count for the shared heuristic.

count-tokens-heuristic

fn (messages: Vec, model: Str): Int

Provider-agnostic char/4 token estimator. Use as a fallback for ChatOptions.count-tokens-fn when a provider lacks a precise counter (e.g. OpenAI without tiktoken). Tracks BPE tokenizers to within ~10% on English prose; code/JSON/non-Latin will skew higher.

Contract matches the count-tokens-fn slot:

(messages: Vec<Message>, model: Str): Int
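
Example

A sketch of calling the heuristic directly and wiring it as the budget counter (the reported count is approximate by design):

::ai::chat/count-tokens-heuristic([::ai::chat/user-message("Hello there")], "gpt-4o")   // roughly chars / 4

opts ::ai::chat/ChatOptions({
    chat-fn: ::openai::chat/chat-with-tools,
    model: "gpt-4o",
    max-context-tokens: 128000,
    count-tokens-fn: ::ai::chat/count-tokens-heuristic
})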

detect-provider

fn (model: Str): Provider

Map a model name to its Provider. Matches on known prefixes (claude, gpt, o1/o3/o4, grok, gemini). Returns Err for unrecognized models.

Example

detect-provider("claude-sonnet-4-5")  // Provider.Anthropic
detect-provider("gpt-4o")             // Provider.OpenAi

emit-step

fn (opts: ChatOptions, iteration: Int, reply: ChatReply, tool-results: Vec?): Null

Internal: publish a per-turn step record to the current run's stream when opts.emit-steps is on. Errors (e.g. no stream context) are swallowed so callers can opt in unconditionally.

emit-stream-delta

fn (opts: ChatOptions, delta: ReplyDelta): Null

Internal: forward a ReplyDelta to the current run's stream when opts.emit-stream-deltas is on. Errors (e.g. no stream context) are swallowed.

estimate-system-tokens

fn (system: Str?): Int

Internal: char/4 heuristic for the system prompt.

estimate-tools-tokens

fn (tools: Vec<::ai::tool/Tool>?): Int

Internal: char-based heuristic for the JSON-serialized weight of a tool list. Adds ~10 tokens of overhead per tool to account for schema formatting in the wire request.

format-budget-breakdown

fn (parts: Map, total: Int, limit: Int): Str

Internal: human-readable breakdown of a token budget overrun.

noop-on-delta

fn (delta: ReplyDelta): Null

Default on-delta callback used when a streaming caller does not supply one. Discards every event.

ns alias

Alias of ::ai::chat/

Provider-agnostic chat surface used by every ::ai::* consumer.

This module defines:

  • Provider enum and a model-name → provider mapping for routing.
  • Message and ChatReply types that normalize provider responses so downstream code never branches on the underlying SDK.
  • ChatOptions carrying the chat function reference, model name, optional system prompt, and (for tool-using flows) tools plus a max-iterations cap.
  • run-loop, the canonical agent loop that calls chat-fn, dispatches tool_use blocks via ::ai::tool/dispatch, and iterates until the model finishes or the iteration cap is hit.

Provider packages (e.g. ::anthropic::messages, ::openai::chat) each expose a chat-with-tools function matching the new chat-fn contract:

chat-fn(model: Str, messages: Vec<Message>, system: Str?, tools: Vec<Tool>?): ChatReply

The legacy (model, prompt, system) -> Str shape used by ::ai::rag is still supported as-is — only run-loop callers must adopt the new shape.

provider-name

fn (provider: Provider): Str

Human-readable name for a Provider. Falls back to the variant's short name for any provider enrolled by a third-party package via Source -> Provider.Variant arrows. The _ default arm is required because Provider is an open enum.
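
Example

A sketch (the exact strings returned are illustrative; only the fallback behavior is guaranteed):

::ai::chat/provider-name(::ai::chat/Provider.Anthropic)   // e.g. "Anthropic"
::ai::chat/provider-name(::ai::chat/Provider.Mistral)     // enrolled provider: falls back to the variant's short name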

run-loop

fn (opts: ChatOptions, user-msg: Str): Str

Drive a chat conversation through any number of tool-use turns, stopping when the model says end_turn or when max-iterations is reached.

opts.chat-fn must match the tools-aware contract:

(model: Str, messages: Vec<Message>, system: Str?, tools: Vec<Tool>?) -> ChatReply

On each iteration:

  1. Call chat-fn with the running message list.
  2. If the reply has no tool calls, return its text.
  3. Otherwise, dispatch every tool call via ::ai::tool/dispatch, append the assistant turn and one tool-result turn per call, and loop.

Errors from individual tool calls are surfaced back to the model as tool-result-message entries with the failure text — the model is allowed to recover. Hitting max-iterations raises a fail.

Example

add fn (x: Int, y: Int): Int { x + y }

opts ::ai::chat/ChatOptions({
    chat-fn: ::anthropic::messages/chat-with-tools,
    model: "claude-sonnet-4-5",
    tools: [::ai::tool/from-fn(add)],
    max-iterations: 5
})

answer ::ai::chat/run-loop(opts, "What is 17 + 25?")
// answer = "17 + 25 is 42."

run-loop-messages

fn (opts: ChatOptions, messages: Vec): Str

Variant of run-loop that takes an explicit Vec<Message> as starting history. Useful for resuming an existing conversation.

Example

history [
    ::ai::chat/user-message("Hi"),
    ::ai::chat/assistant-message("Hello!", null),
    ::ai::chat/user-message("How are you?")
]
::ai::chat/run-loop-messages(opts, history)

run-loop-step

fn (opts: ChatOptions, messages: Vec, tools: Vec<::ai::tool/Tool>, remaining: Int): Str

Internal: one iteration of run-loop. Tail-recursive.

run-loop-stream

fn (opts: ChatOptions, user-msg: Str): Str
fn (opts: ChatOptions, user-msg: Str, on-delta: Fn?): Str

Streaming counterpart to run-loop. Drives a tools-aware conversation through any number of turns, invoking on-delta(delta: ReplyDelta) for every event the underlying chat-stream-fn emits across all turns.

opts.chat-stream-fn must match the streaming contract:

(model: Str, messages: Vec<Message>, system: Str?,
 tools: Vec<Tool>?, on-delta: Fn) -> ChatReply

On each iteration the function:

  1. Calls chat-stream-fn with the running message list and a merged on-delta that also forwards to ::hot::stream/data when opts.emit-stream-deltas is on.
  2. Collects the final ChatReply. If no tool calls, returns the accumulated text.
  3. Otherwise dispatches every tool call via ::ai::tool/dispatch, appends the assistant turn + tool-result turns, and loops.

Errors and max-iterations behavior match run-loop.

Example

on-token fn (delta: ::ai::chat/ReplyDelta): Null {
    match delta {
        ::ai::chat/ReplyDelta.TextDelta => { print(delta.text) }
        _ => { null }
    }
}

opts ::ai::chat/ChatOptions({
    chat-stream-fn: ::anthropic::chat-tools/chat-with-tools-stream,
    model: "claude-sonnet-4-5",
    tools: [::ai::tool/from-fn(add)]
})

answer ::ai::chat/run-loop-stream(opts, "What is 17 + 25?", on-token)

run-loop-stream-messages

fn (opts: ChatOptions, messages: Vec, on-delta: Fn?): Str

Variant of run-loop-stream that takes an explicit Vec<Message> as starting history. Useful for resuming an existing conversation while still streaming the next response.
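
Example

Resuming an existing conversation while streaming the next reply (opts and on-token as defined in the run-loop-stream example above):

history [
    ::ai::chat/user-message("Hi"),
    ::ai::chat/assistant-message("Hello!", null),
    ::ai::chat/user-message("Tell me more")
]
::ai::chat/run-loop-stream-messages(opts, history, on-token)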

run-loop-stream-step

fn (opts: ChatOptions, messages: Vec, tools: Vec<::ai::tool/Tool>, remaining: Int, on-delta: Fn): Str

Internal: one iteration of run-loop-stream. Tail-recursive.

tool-result-message

fn (result: ::ai::tool/ToolResult): Message

Construct a Message carrying a ::ai::tool/dispatch result back to the model. Echoes result.id as tool-call-id so providers can correlate it with the originating ToolCall.
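
Example

A sketch of echoing a dispatch result back to the model (the ToolResult field names shown here are assumed; see ::ai::tool):

result ::ai::tool/ToolResult({id: "tu_1", content: "18C and clear"})   // normally produced by ::ai::tool/dispatch
::ai::chat/tool-result-message(result)
// Message({role: "tool", content: "18C and clear", tool-call-id: "tu_1"})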

user-message

fn (text: Str): Message

Construct a Message with role: "user" and the given text content.
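
Example

Equivalent to the user-msg construction shown in the Message example below:

::ai::chat/user-message("What's the weather in Paris?")
// Message({role: "user", content: "What's the weather in Paris?"})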

wrap-on-delta

fn (opts: ChatOptions, user-on-delta: Fn?): Fn

Internal: build the effective per-turn on-delta callback — combines an optional caller-supplied user-on-delta with the stream-emit forwarder controlled by opts.emit-stream-deltas.

Types

ChatOptions

ChatOptions type {
    chat-fn: Fn?,
    chat-stream-fn: Fn?,
    model: Str,
    system: Str?,
    tools: Vec<::ai::tool/Tool>?,
    skills: Vec?,
    skill-resolver: ::ai::skill/SkillResolver?,
    max-iterations: Int?,
    max-context-tokens: Int?,
    max-output-tokens: Int?,
    warn-context-pct: Dec?,
    emit-steps: Bool?,
    step-data-type: Str?,
    emit-stream-deltas: Bool?,
    delta-data-type: Str?,
    count-tokens-fn: Fn?,
    model-context-window: Int?
}

Options bundle for chat-consuming functions.

Required

  • chat-fn — function reference used to call the underlying provider. Two contracts are recognized:

    • Legacy: (model: Str, prompt: Str, system: Str?) -> Str — used by ::ai::rag and other single-shot helpers.
    • Tools-aware: (model: Str, messages: Vec<Message>, system: Str?, tools: Vec<Tool>?) -> ChatReply — required by run-loop.
  • model — model identifier passed through to the provider.

  • chat-stream-fn — streaming counterpart used by run-loop-stream. Contract:

    (model: Str, messages: Vec<Message>, system: Str?,
     tools: Vec<Tool>?, on-delta: Fn) -> ChatReply
    

    The function invokes on-delta(delta: ReplyDelta) for each event in turn order and returns the same ChatReply shape as the non-streaming version.

Optional

  • system — system prompt prepended to every call.
  • tools — tools the model is allowed to call from run-loop.
  • skills — vector of skill-meta'd functions to advertise to the model. run-loop wraps these in ::ai::skill/in-memory-resolver and exposes the list_skills/read_skill/apply_skill built-in tools automatically. Ignored when skill-resolver is set.
  • skill-resolver — explicit ::ai::skill/SkillResolver. Takes precedence over skills and lets callers plug in store-backed, embedding-ranked, or otherwise dynamic skill discovery while reusing the run-loop's index + built-in-tools machinery.
  • max-iterations — hard cap on run-loop turns (default 10).
  • max-context-tokens, max-output-tokens, warn-context-pct — token-budget settings. max-context-tokens and warn-context-pct are enforced by the pre-call check-budget step (warn at the percentage threshold, fail on overrun); see check-budget.
  • emit-steps — when true, run-loop publishes a per-turn observability record to the current run's stream via ::hot::stream/data. Defaults to false. Errors from missing stream context are swallowed so opting in is safe in any execution environment.
  • step-data-type — stream data-type label used when emit-steps is on. Defaults to "ai:chat:step".
  • emit-stream-deltas — when true, run-loop-stream mirrors every ReplyDelta to the current run's stream via ::hot::stream/data (in addition to invoking any caller on-delta). Defaults to false. Errors from missing stream context are swallowed.
  • delta-data-type — stream data-type label used when emit-stream-deltas is on. Defaults to "ai:chat:delta".
  • count-tokens-fn — provider's (messages, model) -> Int counter used by pre-call budget checks. When unset, falls back to count-tokens-heuristic.
  • model-context-window — explicit context-window size in tokens. When unset, the budget check uses max-context-tokens directly without computing a percentage.

Example

::tool ::ai::tool
::anth ::anthropic::messages

weather-tool ::tool/from-fn(get-weather)

opts ChatOptions({
    chat-fn: ::anth/chat-with-tools,
    model: "claude-sonnet-4-5",
    system: "You are a concise assistant.",
    tools: [weather-tool],
    max-iterations: 5
})

ChatReply

ChatReply type {
    text: Str?,
    tool-calls: Vec<::ai::tool/ToolCall>?,
    stop-reason: Str,
    usage: Map?,
    raw: Any?
}

Normalized provider response. Returned by the new chat-fn contract and consumed by run-loop.

Fields

  • text — the assistant's text output for this turn (may be empty when the turn is purely tool-use).
  • tool-calls — parsed ToolCall records when the model requested tool use.
  • stop-reason — "end_turn", "tool_use", "max_tokens", "stop_sequence", or any other provider-specific reason.
  • usage — token accounting {input-tokens, output-tokens} when the provider reports it (Phase 1.5 populates this universally).
  • raw — the raw provider response for adapter use; opaque to run-loop callers.
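
Example

A sketch of the two reply shapes run-loop distinguishes (field values are illustrative):

// text turn: run-loop returns text
ChatReply({text: "17 + 25 is 42.", stop-reason: "end_turn", usage: {input-tokens: 312, output-tokens: 18}})

// tool-use turn: run-loop dispatches each call, appends the results, and loops
ChatReply({
    text: "",
    tool-calls: [::ai::tool/ToolCall({id: "tu_1", name: "add", input: {x: 17, y: 25}})],
    stop-reason: "tool_use"
})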

Message

Message type {
    role: Str,
    content: Any,
    tool-calls: Vec<::ai::tool/ToolCall>?,
    tool-call-id: Str?
}

One turn of a chat conversation in normalized form. Provider adapters translate this into their native message shape.

Fields

  • role — "user", "assistant", "system", or "tool".
  • content — usually a Str (plain text). For role: "tool" it is the result returned by ::ai::tool/dispatch. For role: "assistant" turns that called tools it may be empty.
  • tool-calls — present on role: "assistant" turns that requested tool use; carries the parsed ToolCall records.
  • tool-call-id — present on role: "tool" results, echoing the id of the originating ToolCall.

Example

user-msg Message({role: "user", content: "What's the weather in Paris?"})

assistant-call Message({
    role: "assistant",
    content: "",
    tool-calls: [::tool/ToolCall({id: "tu_1", name: "get-weather", input: {city: "Paris"}})]
})

tool-result Message({
    role: "tool",
    content: "18C and clear",
    tool-call-id: "tu_1"
})

Provider

Known AI chat completion providers. Declared enum open so third-party adapters can register their own provider identity via arrow enrollment without forking this enum:

Mistral type { name: Str }
Mistral -> Provider.Mistral

Match expressions on Provider MUST include a _ default arm (open-enum-match-missing-default otherwise).
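
Example

A sketch of the required default arm (the arm bodies are illustrative):

match provider {
    ::ai::chat/Provider.Anthropic => { "claude" }
    ::ai::chat/Provider.OpenAi => { "gpt" }
    _ => { "unknown" }   // mandatory: Provider is an open enum
}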

ReplyDelta

Provider-agnostic streaming event emitted by a chat-stream-fn. Each event is one of:

  • TextDelta — a text fragment (concatenate to rebuild the turn).
  • ToolUseStart — a new tool call has begun (id, name).
  • ToolUseInputDelta — partial JSON of a tool call's input.
  • ToolUseEnd — a tool call has finished streaming.
  • Stop — terminal event with stop-reason and optional usage.

Provider adapters translate native SSE/wire events into this enum so consumers (and run-loop-stream) never branch on the underlying SDK.

Declared enum open so provider adapters can introduce provider-specific delta kinds (e.g., ThinkingDelta, ReasoningDelta) via arrow enrollment without forking this type. Match expressions on ReplyDelta MUST include a _ default arm (open-enum-match-missing-default otherwise) — unknown deltas should be silently ignored or forwarded.

StreamStop

StreamStop type {
    reason: Str,
    usage: Map?
}

Final event of a streaming turn. reason mirrors ChatReply.stop-reason ("end_turn", "tool_use", "max_tokens", …). usage carries the normalized {input-tokens, output-tokens} map when the provider reports it on this terminal event.

TextDelta

TextDelta type {
    text: Str
}

Incremental text fragment from a streaming reply. text is the delta only — concatenate them in order to reconstruct the full assistant text for the turn.

ToolUseEnd

ToolUseEnd type {
    id: Str
}

A tool_use block has finished streaming. The accumulated input JSON is now ready to parse and dispatch.

ToolUseInputDelta

ToolUseInputDelta type {
    id: Str,
    partial-input-json: Str
}

Partial JSON fragment of a tool call's input. Concatenate the partial-input-json strings for a given id, then from-json the result to reconstruct the call's input map.
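
Example

A sketch of rebuilding one call's input from its fragments (the concat helper and the escaped-quote string syntax are assumptions here; from-json is the call named above):

// partial-input-json fragments for id "tu_1", in arrival order
part-1 "{\"city\": "
part-2 "\"Paris\"}"

input from-json(concat([part-1, part-2]))   // {city: "Paris"}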

ToolUseStart

ToolUseStart type {
    id: Str,
    name: Str
}

A new tool_use block has begun in a streaming reply. The model has chosen the tool name for the call identified by id; the input arguments stream as ToolUseInputDelta events until a matching ToolUseEnd.