Foundation — the SDK basics, ten short examples

This is Tier 2 of the example catalogue — the bare-minimum demonstration of each major SDK capability. Each example is 60-150 lines, no sibling README, single-purpose. The goal of each is "here's the cheapest demo that proves capability X works."

If you've already run Example 01 — hello agent, the foundation examples are your next stop. Read them in dependency order: tools → memory → workflows → safety → directives → routing → app structure.

You don't have to read all ten. Pick the ones that match what you're shipping. Each is independent.


The ten foundation examples

Tool calling

Example 02 — tool_agent

Agent with tool-calling. Defines a tool, registers it with the agent, watches the agent call it. The cheapest demonstration that "I can give an agent tools without scaffolding."

When to read: after hello-world; before any production use.
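The core of any tool-calling example is a registry plus a dispatch step. A minimal sketch of that shape in plain Python — every name here (`tool`, `ToyAgent`, `register`) is illustrative, not the Sagewai API:

```python
# Illustrative sketch of the tool-calling pattern (not the Sagewai API):
# a tool is a named, described function the agent can invoke by name.

def tool(name, description):
    """Decorator that attaches tool metadata to a plain function."""
    def wrap(fn):
        fn.tool_name = name
        fn.tool_description = description
        return fn
    return wrap

@tool("get_weather", "Return the weather for a city.")
def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # stub; a real tool would call an API

class ToyAgent:
    """Minimal registry + dispatch, the core of any tool-calling loop."""
    def __init__(self):
        self.tools = {}

    def register(self, fn):
        self.tools[fn.tool_name] = fn

    def call_tool(self, name, **kwargs):
        return self.tools[name](**kwargs)

agent = ToyAgent()
agent.register(get_weather)
result = agent.call_tool("get_weather", city="Paris")
```

In the real example the model, not your code, decides when to call the tool; the registry-and-dispatch shape is the same.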

Multi-model swap

Example 03 — multi_model

Same agent code, swap Anthropic ↔ OpenAI ↔ local. Demonstrates the SDK's LLM-agnostic surface end-to-end.

When to read: when you're choosing your initial LLM provider, or when the audience-pin person asks "does this lock me into Anthropic?"
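Provider-agnosticism comes down to one rule: agent code depends on an interface, providers are thin adapters behind it. A toy version of that shape, assuming nothing about Sagewai's actual classes:

```python
# Sketch of an LLM-agnostic surface (hypothetical names, not Sagewai's):
# the agent body never names a provider, so swapping is one line.
from typing import Protocol

class LLM(Protocol):
    def complete(self, prompt: str) -> str: ...

class FakeAnthropic:
    def complete(self, prompt: str) -> str:
        return f"[anthropic] {prompt}"

class FakeLocal:
    def complete(self, prompt: str) -> str:
        return f"[local] {prompt}"

def run_agent(llm: LLM, task: str) -> str:
    # Same agent code for every provider.
    return llm.complete(task)

a = run_agent(FakeAnthropic(), "summarise")
b = run_agent(FakeLocal(), "summarise")
```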

Memory basics

Example 04 — memory_agent

Agent with memory. Basic vector memory, store-and-retrieve. Bridge to the deeper memory examples.

When to read: when you need conversation context retention across turns.
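Store-and-retrieve reduces to: embed on write, score by similarity on read, return the best hit. A toy stand-in where word overlap plays the role of an embedding — the real example uses a proper vector store, so treat every name here as illustrative:

```python
# Toy store-and-retrieve memory (illustrative, not Sagewai's vector store):
# real memory embeds text; here cosine over word counts stands in.
from collections import Counter
import math

class ToyMemory:
    def __init__(self):
        self.items = []

    def store(self, text: str):
        self.items.append((Counter(text.lower().split()), text))

    def retrieve(self, query: str) -> str:
        q = Counter(query.lower().split())
        def score(vec):
            dot = sum(vec[w] * q[w] for w in q)
            norm = (math.sqrt(sum(v * v for v in vec.values()))
                    * math.sqrt(sum(v * v for v in q.values())))
            return dot / norm if norm else 0.0
        # Return the stored text most similar to the query.
        return max(self.items, key=lambda item: score(item[0]))[1]

mem = ToyMemory()
mem.store("the user prefers dark mode")
mem.store("the user's name is Ada")
hit = mem.retrieve("what is the user's name")
```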

Multi-step workflows

Example 05 — workflow

A two-step workflow that chains agent calls. Plan → execute. The smallest workflow surface.

When to read: when single-turn agents stop being enough.
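The two-step shape in miniature: step one produces a plan, step two consumes it, one agent call per step. The helpers below are hypothetical stand-ins, not Sagewai's workflow API:

```python
# Plan -> execute in its smallest form (hypothetical helpers):
# each function stands in for one agent call.

def plan_step(goal: str) -> list[str]:
    # A real planner would be an LLM call returning structured steps.
    return [f"research {goal}", f"write up {goal}"]

def execute_step(step: str) -> str:
    # A real executor would be a second agent call per step.
    return f"done: {step}"

def workflow(goal: str) -> list[str]:
    # Chaining: the second call consumes the first call's output.
    return [execute_step(s) for s in plan_step(goal)]

results = workflow("quarterly report")
```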

Safety filters

Example 06 — guardrails

A guardrail that intercepts unsafe outputs and rejects them. The cheapest demo of the safety surface.

When to read: before you expose an agent to users.
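An intercept-and-reject guardrail is, at bottom, a predicate that runs on every output before it ships. A minimal sketch, with the blocklist and function names invented for the demo:

```python
# Sketch of the intercept-and-reject shape (names are illustrative):
# a guardrail is a predicate applied to every output before release.

BLOCKLIST = {"password", "ssn"}

def guardrail(output: str) -> bool:
    """Return True when the output is safe to release."""
    return not any(term in output.lower() for term in BLOCKLIST)

def guarded_respond(raw_output: str) -> str:
    # Unsafe outputs never reach the user.
    if not guardrail(raw_output):
        return "[rejected by guardrail]"
    return raw_output

safe = guarded_respond("The capital of France is Paris.")
blocked = guarded_respond("Your password is hunter2")
```

Real guardrails classify rather than string-match, but they sit at the same point in the pipeline.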

MCP integration

Example 07 — mcp_tools

An agent calling tools served by an MCP (Anthropic's open standard) server. Demonstrates that Sagewai speaks MCP — third-party tool servers work out of the box.

When to read: when integrating third-party MCP-compliant tool servers.

Directives — the harness-any-LLM moat

Example 08 — directives

The directive library is what lets Sagewai harness even cheap LLMs into useful agents. @datetime, @context, @memory, /tool.name directives expand at prompt-resolution time.

When to read: when you want the same agent code to work on Opus and on local llama3 with similar quality.
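The principle is simple: markers are substituted at prompt-resolution time, so the model receives concrete values instead of being asked to infer them. A toy resolver — the directive table and fixed date are demo assumptions, not Sagewai's implementation:

```python
# How directive expansion works in principle (toy resolver, not Sagewai's):
# @-markers are replaced before the prompt ever reaches the model.
import re
from datetime import date

DIRECTIVES = {
    "@datetime": lambda: date(2024, 1, 15).isoformat(),  # fixed for the demo
    "@context": lambda: "user is on the billing page",
}

def resolve(prompt: str) -> str:
    # Unknown @-markers pass through unchanged.
    return re.sub(
        r"@\w+",
        lambda m: DIRECTIVES.get(m.group(0), lambda: m.group(0))(),
        prompt,
    )

resolved = resolve("Today is @datetime. Context: @context.")
```

This is why cheap models benefit most: the hard lookups happen outside the model.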

Model routing

Example 13 — model_routing

Routing rules — pick a different model per task class (cheap for simple, frontier for complex). The cost-down primitive.

When to read: when you want a tier router in front of your agent.
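A tier router is an ordered rule list: classify the task, map the class to a model. The rule shape and model names below are illustrative, not Sagewai's routing config:

```python
# Minimal tier router (rule shape is illustrative, not Sagewai's config):
# first matching predicate wins; the last rule is the default tier.

RULES = [
    (lambda task: len(task.split()) > 20 or "analyze" in task, "frontier-model"),
    (lambda task: True, "cheap-model"),  # default tier
]

def route(task: str) -> str:
    for predicate, model in RULES:
        if predicate(task):
            return model
    raise RuntimeError("no rule matched")

simple = route("what time is it")
hard = route("analyze this contract for risk")
```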

Local LLM routing

Example 18 — local_llm_routing

Same agent code running against Ollama and LM Studio. The LLM-agnostic claim depends on this running cleanly on Ollama.

When to read: when you're moving off paid LLM providers.

App-factory pattern

Example 27 — app_factory

Production project structure: factory function, dependency injection, structured config. The shape your real app should grow into.

When to read: when your app outgrows a single file.
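The factory shape in miniature: config in, wired app out, every dependency injected rather than constructed inline. All names here (`Config`, `create_app`, `FakeLLM`) are hypothetical stand-ins for whatever the example actually defines:

```python
# App-factory sketch (hypothetical names, not the example's real code):
# a factory takes structured config and returns a fully wired app.
from dataclasses import dataclass

@dataclass
class Config:
    model: str
    temperature: float = 0.0

class FakeLLM:
    def __init__(self, model: str):
        self.model = model

@dataclass
class App:
    config: Config
    llm: object  # injected, so tests can pass a fake

def create_app(config: Config, llm=None) -> App:
    # Dependency injection: callers (and tests) can override the LLM.
    llm = llm or FakeLLM(config.model)
    return App(config=config, llm=llm)

app = create_app(Config(model="claude-3-haiku"))
```

The payoff is testability: `create_app(cfg, llm=stub)` swaps the provider without touching app code.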


Where to go next

  • Lighthouse — once you've covered the foundation you need, the lighthouse pages show the production-grade compositions.
  • Patterns — Tier 4 reference examples, longer than foundation but shorter than lighthouse.
  • Pillars — capability deep-dives. Each pillar links back to its foundation and lighthouse examples.
  • Reference — examples — the full numbered list, in case you want to find something by file number.