Lighthouse — production-grade examples that prove the claims

This is the primary discovery surface for Sagewai. Six use-case pages, each backed by one or more shipped examples that walk through a real-world problem the audience-pin person — a senior engineer at a 50-500-person SaaS who got the "add AI to the product this quarter" email — actually has.

Why this section exists: the SDK has 47 examples. The first 32 are entry-level demonstrations of individual features; the last 15 are lighthouses — production-grade walkthroughs with mermaid architecture diagrams, real numbers, and a CFO-readable cost story. Listing everything chronologically buries the lighthouse work behind 32 thin examples; this section flips the discovery order.

Each lighthouse page below has a runnable proof you can clone tonight and a "Real-world use cases" section that names the persona, the industry, and the concrete win.


The six lighthouse use cases

Train your own model

"Anthropic's pricing changed. Here's how we never had to flinch."

Capture training data from Opus, fine-tune a 3B model on a free Colab T4 or a $0.34/hr RunPod A10G, deploy it via Ollama, and serve real traffic at zero per-token cost. The full Q3-cost-down arc from the audience pin, end-to-end in a weekend for under $5.

Examples: 38, 38a, 44, 45, 47, 48, 36, 25 · Pillar: Training Loop
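The arc starts with capturing the teacher model's traffic as training data. A minimal sketch of that step, assuming nothing about the SDK's actual API: `log_pair`, the `training_data.jsonl` path, and the record layout are all hypothetical, though the messages-list shape matches common chat fine-tuning formats.

```python
import json
from pathlib import Path

CAPTURE_FILE = Path("training_data.jsonl")  # hypothetical path

def log_pair(prompt: str, response: str, path: Path = CAPTURE_FILE) -> None:
    """Append one teacher prompt/response pair as a chat-format JSONL
    record, ready to feed a fine-tuning run for the small model."""
    record = {
        "messages": [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": response},
        ]
    }
    with path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Every call to the expensive teacher gets logged; the file becomes
# the 3B model's dataset.
log_pair("Summarise this ticket: printer is on fire", "The customer reports a hardware emergency.")
```

Once the file has a few thousand pairs, it is exactly what a Colab or RunPod fine-tuning job consumes.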

Moderation and classification

"Three HuggingFace classifiers carry the deterministic half. A cheap LLM judges the boundary cases. ML and LLM are both first-class."

Community moderation, support-ticket triage, sales-lead qualification — the pattern is the same: a classifier ensemble inside a sealed sandbox, surfaced as MCP tools, with a Haiku-class LLM judging the boundary cases. Full audit trail per decision.

Examples: 49, 42 · Pillar: SDK
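The split between the deterministic half and the LLM judge can be sketched in a few lines. This is an illustration only: the stub scorers stand in for real HuggingFace classifiers, the LLM judge is a stub for a cheap model call, and the 0.25/0.75 thresholds are assumptions, not the example's tuned values.

```python
from statistics import mean

def toxicity(text: str) -> float:   # stub for a HuggingFace classifier
    return 0.9 if "idiot" in text else 0.1

def spam(text: str) -> float:       # stub for a second classifier
    return 0.8 if "free money" in text else 0.05

def harassment(text: str) -> float: # stub for a third classifier
    return 0.85 if "idiot" in text else 0.1

def llm_judge(text: str) -> str:    # stub for the cheap-LLM boundary call
    return "remove" if "idiot" in text else "allow"

def moderate(text: str, low: float = 0.25, high: float = 0.75):
    """Average the classifier ensemble; only ambiguous scores escalate
    to the LLM judge. Returns (decision, audit_record)."""
    scores = {"toxicity": toxicity(text), "spam": spam(text),
              "harassment": harassment(text)}
    avg = mean(scores.values())
    if avg >= high:
        return "remove", {"scores": scores, "route": "ensemble"}
    if avg <= low:
        return "allow", {"scores": scores, "route": "ensemble"}
    return llm_judge(text), {"scores": scores, "route": "llm"}
```

Clear-cut content never costs an LLM call, and the returned audit record is the per-decision trail the blurb promises.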

Memory and retrieval

"Weak LLMs feel like Opus when you only feed them the relevant slice of history."

Two patterns: semantic checkpoint recall (vague references like "back to that earlier point" resolve into the right slice of prior context, even on a 7B Ollama model) and graph beats vector (multi-hop reasoning across incident dependencies, where vector chunking loses).

Examples: 37, 41, 04, 29, 32 · Pillar: SDK
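The semantic-checkpoint idea reduces to scoring a vague query against saved checkpoints and feeding back only the winner. A toy sketch: token-overlap (Jaccard) stands in for the embedding similarity the real examples use, and the checkpoint strings are invented.

```python
def tokenize(text: str) -> set:
    return set(text.lower().split())

def score(query: str, checkpoint: str) -> float:
    """Jaccard overlap as a stand-in for embedding cosine similarity."""
    q, c = tokenize(query), tokenize(checkpoint)
    return len(q & c) / len(q | c) if q | c else 0.0

def recall(query: str, checkpoints: list) -> str:
    """Resolve a vague reference into the most relevant prior checkpoint,
    so only that slice of history reaches the small model."""
    return max(checkpoints, key=lambda c: score(query, c))

checkpoints = [
    "decided to shard the billing database by tenant id",
    "agreed the retry budget for the payments worker is three attempts",
    "postponed the dashboard migration to next sprint",
]
best = recall("back to that earlier point about the retry budget", checkpoints)
```

A 7B model handed only `best` plus the current turn behaves far better than one drowning in the full transcript.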

Production multitenancy

"A healthcare worker can't claim finance tasks. Sandbox credentials never touch the worker host."

The full Sealed-spine story: per-CLI workload identity, scoped credentials provided to a containerised agent, cross-tenant isolation enforced by the Fleet dispatcher, audit trail across the boundary. The example a security reviewer reads.

Examples: 33, 39, 16 · Pillar: Fleet + Sealed spine
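The dispatcher-side invariant in the tagline can be sketched independently of the real Fleet code. Every name here (`WorkerIdentity`, `try_claim`, the audit-record fields) is hypothetical; the point is the shape of the check, not the SDK's API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class WorkerIdentity:
    worker_id: str
    tenant_scopes: frozenset  # tenants this workload identity may serve

@dataclass(frozen=True)
class Task:
    task_id: str
    tenant: str

def try_claim(worker: WorkerIdentity, task: Task):
    """Dispatcher-side check: a worker may only claim tasks inside its
    tenant scope; every attempt — allowed or denied — is recorded."""
    allowed = task.tenant in worker.tenant_scopes
    audit = {"worker": worker.worker_id, "task": task.task_id,
             "tenant": task.tenant, "allowed": allowed}
    return allowed, audit

hc_worker = WorkerIdentity("w-health-1", frozenset({"healthcare"}))
denied, _ = try_claim(hc_worker, Task("t-42", "finance"))      # cross-tenant
granted, _ = try_claim(hc_worker, Task("t-43", "healthcare"))  # in scope
```

Because the check lives in the dispatcher rather than the worker, a compromised worker host never sees another tenant's tasks or the credentials behind them.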

Observability and cost

"AI infrastructure spending without a dashboard is finance malpractice. Here's what good looks like."

Real telemetry into Grafana, Iron Man HUD with live agents on the graph, per-tenant cost tracking, OpenTelemetry pipeline. Five rows, fourteen panels, all from a real run of the example — no canned data.

Examples: 34, 43, 40, 12 · Pillar: Observatory
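Per-tenant cost tracking bottoms out in a simple accumulator over token telemetry. A hedged sketch: the prices below are illustrative placeholders, not current provider rates, and `CostMeter` is an invented name for the series a per-tenant Grafana cost panel would read.

```python
from collections import defaultdict

# Illustrative per-million-token prices — check your provider's rate card.
PRICE_PER_MTOK = {"opus": 15.0, "haiku": 0.80, "local-3b": 0.0}

class CostMeter:
    """Accumulate per-tenant spend from token telemetry."""
    def __init__(self):
        self.spend = defaultdict(float)

    def record(self, tenant: str, model: str, tokens: int) -> None:
        self.spend[tenant] += tokens / 1_000_000 * PRICE_PER_MTOK[model]

meter = CostMeter()
meter.record("acme", "opus", 200_000)        # 0.2 MTok at $15/MTok
meter.record("acme", "local-3b", 5_000_000)  # self-hosted: zero per-token
meter.record("globex", "haiku", 1_000_000)
```

In the real pipeline these records flow through OpenTelemetry into Grafana rather than an in-memory dict, but the aggregation is the same.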

Inference deployment

"Bring your own GPU, your own endpoint, your own model. We provide the wiring."

The five inference tiers (free Colab T4, Vast.ai bid market, RunPod reliable rental, Modal serverless, bring-your-own endpoint) plus the SLM-deployment story (mlx-lm on Apple Silicon, Ollama everywhere else). When to pick which tier and how each plugs into Sagewai.

Examples: 44, 45, 46, 47, 48, 38, 38a, 18 · Pillar: Training Loop
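The "when to pick which tier" decision can be written down as a small routing function. This is one plausible ordering of the trade-offs, not the SDK's policy: the predicates and their priority are assumptions.

```python
def pick_tier(budget_per_hr: float, can_tolerate_preemption: bool,
              needs_always_on: bool, has_own_gpu: bool) -> str:
    """Illustrative routing across the five inference tiers."""
    if has_own_gpu:
        return "bring-your-own endpoint"   # you already have the hardware
    if budget_per_hr == 0:
        return "free Colab T4"             # session limits; fine for experiments
    if needs_always_on:
        return "RunPod reliable rental"    # predictable hourly pricing
    if can_tolerate_preemption:
        return "Vast.ai bid market"        # cheapest, but can be reclaimed
    return "Modal serverless"              # scale-to-zero, pay per request
```

Whatever tier the function lands on, the deployed model is surfaced to Sagewai the same way — as an endpoint — which is what makes the tiers interchangeable.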


How the lighthouse pages are structured

Each page below follows the same shape so you can scan the one closest to your job:

  1. What this proves — one paragraph, no hype, the invariants the example demonstrates.
  2. Architecture — a mermaid flowchart of the moving parts.
  3. Run it — clean-machine 60-second path and full live path.
  4. Real-world use cases — three to five concrete persona + industry callouts. The bar is "I recognise one of these as my own job."
  5. Companion examples — links to the shipped examples that back the page, with sibling READMEs on PyPI and GitHub.
  6. What to read next — sibling lighthouses, prerequisite foundation, primary pillar.

If you'd rather see the SDK basics before reading lighthouse work, Foundation is the right starting point. If you came here for production patterns shorter than a lighthouse but longer than a foundation example, Patterns is the middle layer.