Foundation — the SDK basics, ten short examples
This is Tier 2 of the example catalogue — the bare-minimum demonstration of each major SDK capability. Each example is 60-150 lines, has no sibling README, and is single-purpose. The goal of each is "here's the cheapest demo that proves capability X works."
If you've already run Example 01 — hello agent, the foundation examples are your next stop. Read them in dependency order: tools → memory → workflows → safety → directives → routing → app structure.
You don't have to read all ten. Pick the ones that match what you're shipping. Each is independent.
The ten foundation examples
Tool calling
Agent with tool-calling. Defines a tool, registers it with the agent, watches the agent call it. The cheapest demonstration that "I can give an agent tools without scaffolding."
When to read: after hello-world; before any production use.
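The core loop the example demonstrates can be sketched without the SDK. Everything below is illustrative — the decorator, registry, and `dispatch` helper are placeholder names, not Sagewai's API — but the shape is the same: register a callable under a name, then execute a model-requested call by looking that name up.

```python
# Tool registry: maps tool names to callables the agent may invoke.
TOOLS = {}

def tool(fn):
    """Register fn under its own name so the agent can call it."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def add(a: int, b: int) -> int:
    return a + b

def dispatch(call: dict):
    """Execute one tool call of the shape {"name": ..., "args": {...}},
    as the agent would after the model requests a tool."""
    return TOOLS[call["name"]](**call["args"])

result = dispatch({"name": "add", "args": {"a": 2, "b": 3}})
print(result)  # 5
```

In the real example the `call` dict comes from the model's tool-use response rather than being hand-built.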
Multi-model swap
Same agent code, swap Anthropic ↔ OpenAI ↔ local. Demonstrates the SDK's LLM-agnostic surface end-to-end.
When to read: when you're choosing your initial LLM provider, or when the audience-pin person asks "does this lock me into Anthropic?"
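The pattern behind "same agent code, different provider" is a one-method interface between the agent and the model. A minimal sketch, with stub classes standing in for the real provider adapters (none of these names are Sagewai's):

```python
from typing import Protocol

class LLM(Protocol):
    """The only surface the agent depends on."""
    def complete(self, prompt: str) -> str: ...

class AnthropicStub:
    def complete(self, prompt: str) -> str:
        return f"[anthropic] {prompt}"

class LocalStub:
    def complete(self, prompt: str) -> str:
        return f"[local] {prompt}"

def run_agent(llm: LLM, task: str) -> str:
    # Agent code never names a provider; swapping = passing a different adapter.
    return llm.complete(task)

print(run_agent(AnthropicStub(), "hi"))  # [anthropic] hi
print(run_agent(LocalStub(), "hi"))      # [local] hi
```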
Memory basics
Agent with memory. Basic vector memory, store-and-retrieve. Bridge to the deeper memory examples.
When to read: when you need conversation context retention across turns.
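Store-and-retrieve vector memory fits in a few lines if you fake the embeddings. This sketch uses word-count vectors and cosine similarity purely to show the shape — real memory uses a proper embedding model, and the class name is hypothetical:

```python
import math
from collections import Counter

class VectorMemory:
    def __init__(self):
        self.items = []  # (embedding, text) pairs

    def _embed(self, text):
        # Stand-in embedding: a bag-of-words count vector.
        return Counter(text.lower().split())

    def store(self, text):
        self.items.append((self._embed(text), text))

    def retrieve(self, query):
        q = self._embed(query)
        def cos(v):
            dot = sum(v[w] * q[w] for w in q)
            norm = (math.sqrt(sum(c * c for c in v.values()))
                    * math.sqrt(sum(c * c for c in q.values())))
            return dot / norm if norm else 0.0
        # Return the stored text most similar to the query.
        return max(self.items, key=lambda it: cos(it[0]))[1]

mem = VectorMemory()
mem.store("the user prefers dark mode")
mem.store("the deploy target is eu-west-1")
print(mem.retrieve("which region do we deploy to"))
# the deploy target is eu-west-1
```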
Multi-step workflows
A two-step workflow that chains agent calls. Plan → execute. The smallest workflow surface.
When to read: when single-turn agents stop being enough.
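The plan → execute chain reduces to: step one produces a plan, step two consumes it. In the example the steps are agent calls; here they are plain functions (names invented) so only the chaining is on display:

```python
def plan(goal: str) -> list[str]:
    # Step 1: an agent would generate this plan; we fake it.
    return [f"research {goal}", f"summarize {goal}"]

def execute(steps: list[str]) -> str:
    # Step 2: consumes step 1's output.
    return "; ".join(f"done: {s}" for s in steps)

def workflow(goal: str) -> str:
    return execute(plan(goal))  # output of step 1 feeds step 2

print(workflow("pricing"))
# done: research pricing; done: summarize pricing
```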
Safety filters
A guardrail that intercepts unsafe outputs and rejects them. The cheapest demo of the safety surface.
When to read: before you expose an agent to users.
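A guardrail is a filter sitting between the model and the caller that inspects the output before anyone sees it. This sketch uses a toy substring blocklist — a real filter would be classifier-based, and all names here are illustrative:

```python
BLOCKLIST = {"password", "ssn"}

class UnsafeOutput(Exception):
    pass

def guarded(generate):
    """Wrap a generation function so unsafe outputs are rejected."""
    def wrapper(prompt: str) -> str:
        out = generate(prompt)
        if any(term in out.lower() for term in BLOCKLIST):
            raise UnsafeOutput("output rejected by safety filter")
        return out
    return wrapper

@guarded
def fake_model(prompt: str) -> str:
    return f"echo: {prompt}"

print(fake_model("hello"))  # echo: hello
try:
    fake_model("my password is 123")
except UnsafeOutput as e:
    print(e)  # output rejected by safety filter
```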
MCP integration
An agent calling tools served by an MCP (Anthropic's open standard) server. Demonstrates that Sagewai speaks MCP — third-party tool servers work out of the box.
When to read: when integrating third-party MCP-compliant tool servers.
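For orientation, this is the wire shape of an MCP tool invocation: MCP is JSON-RPC 2.0, and clients call server tools via the `tools/call` method. The tool name and arguments below are made up; only the envelope follows the spec:

```python
import json

# A JSON-RPC 2.0 request invoking a tool on an MCP server.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_weather",          # hypothetical tool
        "arguments": {"city": "Berlin"},
    },
}
print(json.dumps(request, indent=2))
```

The SDK builds and sends these for you; the example shows it happening end to end.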
Directives — the harness-any-LLM moat
The directive library is what lets Sagewai harness even cheap LLMs into useful agents. Directives such as @datetime, @context, @memory, and /tool.name expand at prompt-resolution time.
When to read: when you want the same agent code to work on Opus and on local llama3 with similar quality.
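The mechanism is expansion before the model ever sees the prompt: each directive is replaced with concrete context, so weaker models get the same grounding a frontier model would infer. A sketch of the idea (the @names come from the text above; the resolver code is illustrative, not Sagewai's):

```python
# Directive table: name -> expansion function, run at prompt-resolution
# time. Values are fixed here so the demo is deterministic.
DIRECTIVES = {
    "@datetime": lambda: "2024-01-01",
    "@memory": lambda: "user prefers terse answers",
}

def resolve(prompt: str) -> str:
    """Expand every directive occurrence into concrete text."""
    for name, expand in DIRECTIVES.items():
        prompt = prompt.replace(name, expand())
    return prompt

print(resolve("Today is @datetime. Recall: @memory."))
# Today is 2024-01-01. Recall: user prefers terse answers.
```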
Model routing
Routing rules — pick a different model per task class (cheap for simple, frontier for complex). The cost-reduction primitive.
When to read: when you want a tier router in front of your agent.
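A tier router is two pieces: a classifier and a routing table. Model names and the word-count heuristic below are placeholders — a real router might use a small classifier model — but the structure matches the example:

```python
# Routing table: task class -> model. Names are illustrative.
ROUTES = {
    "simple": "local-llama3",
    "complex": "claude-opus",
}

def classify(task: str) -> str:
    # Stand-in heuristic: long tasks go to the frontier tier.
    return "complex" if len(task.split()) > 20 else "simple"

def route(task: str) -> str:
    return ROUTES[classify(task)]

print(route("what is 2 + 2"))  # local-llama3
```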
Local LLM routing
Example 18 — local_llm_routing
Same agent code running against Ollama and LM Studio. The LLM-agnostic claim depends on this running cleanly on Ollama.
When to read: when you're going off paid LLM providers.
App-factory pattern
Production project structure: factory function, dependency injection, structured config. The shape your real app should grow into.
When to read: when your app outgrows a single file.
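The factory pattern in miniature (structure only; every name is illustrative): config comes in as data, dependencies are constructed once inside the factory and injected, and the assembled app is returned rather than built from module-level globals:

```python
from dataclasses import dataclass

@dataclass
class Config:
    model: str
    temperature: float

class FakeLLM:
    def __init__(self, model: str):
        self.model = model

@dataclass
class App:
    config: Config
    llm: FakeLLM

def create_app(config: Config) -> App:
    llm = FakeLLM(config.model)         # dependency built from config
    return App(config=config, llm=llm)  # injected, not globally imported

app = create_app(Config(model="claude-sonnet", temperature=0.2))
print(app.llm.model)  # claude-sonnet
```

Because everything flows through `create_app`, tests can build an `App` with stub dependencies and production can build one from real config.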
What to read next
- Lighthouse — once you've covered the foundation you need, the lighthouse pages show the production-grade compositions.
- Patterns — Tier 4 reference examples, longer than foundation but shorter than lighthouse.
- Pillars — capability deep-dives. Each pillar links back to its foundation and lighthouse examples.
- Reference — examples — the full numbered list, in case you want to find something by file number.