Build production-grade AI agents that work with any model. Multi-tenant, observable, durable, and enterprise-ready. Three lines to your first agent.
```python
from sagewai.engines.universal import UniversalAgent
from sagewai.models.tool import tool

@tool
async def get_weather(city: str) -> str:
    """Get current weather for a city."""
    return f"Sunny, 22°C in {city}"

agent = UniversalAgent(
    name="weather-bot",
    model="gpt-4o",
    tools=[get_weather],
)

response = await agent.chat("What's the weather in Berlin?")
print(response)  # "It's sunny and 22°C in Berlin!"
```

Everything you need to build, govern, and operate AI agents at scale.
Build agents in Python with multi-model support, custom tools, persistent memory, guardrails, and durable workflows. Three lines to your first agent; 100+ models via LiteLLM.
Store, version, discover, and govern AI agents across your organization. Agent lifecycle management with approval workflows and audit trails.
Proxy, route, and apply budget controls to all LLM access. Point Claude Code, Cursor, or Codex at the Harness for automatic cost optimization and policy enforcement.
Source of truth for all AI expenditure. Cost tracking per model, OpenTelemetry tracing, Prometheus metrics, audit logs, and compliance-ready reporting.
Fine-tune domain-specific LLMs with Unsloth, serve them locally, and route through the Harness at $0 per token. Build legal, medical, or finance models with your own data.
From simple single-agent tasks to complex multi-agent pipelines with safety guardrails and cost controls.
Compose agents into sequential, parallel, or loop patterns. Each agent can use a different model.
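As a rough mental model of the sequential and parallel patterns (plain asyncio, not sagewai's implementation; `fake_agent` is a stand-in for an agent call): a sequential pipeline threads each agent's output into the next, while a parallel fan-out sends the same prompt to every agent concurrently.

```python
import asyncio

async def fake_agent(name: str, prompt: str) -> str:
    # Stand-in for an agent invocation; a real agent would call an LLM here.
    return f"{name}({prompt})"

async def run_sequential(names, prompt):
    # Each agent receives the previous agent's output.
    out = prompt
    for name in names:
        out = await fake_agent(name, out)
    return out

async def run_parallel(names, prompt):
    # Every agent receives the same prompt concurrently.
    return await asyncio.gather(*(fake_agent(n, prompt) for n in names))

print(asyncio.run(run_sequential(["researcher", "writer"], "topic")))
# writer(researcher(topic))
print(asyncio.run(run_parallel(["researcher", "writer"], "topic")))
# ['researcher(topic)', 'writer(topic)']
```

The sagewai example below expresses the same sequential shape declaratively, with a different model behind each agent.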
```python
from sagewai.engines.universal import UniversalAgent
from sagewai.core.workflows import SequentialAgent, ParallelAgent

researcher = UniversalAgent(name="researcher", model="gpt-4o")
writer = UniversalAgent(name="writer", model="claude-3-5-sonnet-20241022")
reviewer = UniversalAgent(name="reviewer", model="gpt-4o-mini")

# Pipeline: research -> write -> review
pipeline = SequentialAgent(
    name="article-pipeline",
    agents=[researcher, writer, reviewer],
)

result = await pipeline.chat("Write about quantum computing")
```

Protect inputs and outputs with PII detection, hallucination guards, content filters, and token budgets.
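To make "redact" mode concrete, here is a toy plain-Python sketch of what redact-style PII filtering does before a prompt reaches the model. The regex patterns and `redact` helper are illustrative assumptions, not sagewai's actual implementation.

```python
import re

# Hypothetical patterns for illustration only; a production guard would use
# far more robust detection than these simple regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    # Replace each detected entity with a placeholder label.
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Reach me at jane@example.com or 555-867-5309, SSN 123-45-6789."))
# Reach me at [EMAIL] or [PHONE], SSN [SSN].
```

The library-level equivalent below attaches guards declaratively, so redaction happens inside the agent rather than in your calling code.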
```python
from sagewai.engines.universal import UniversalAgent
from sagewai.safety.pii import PIIGuard, PIIEntityType
from sagewai.safety.hallucination import HallucinationGuard

agent = UniversalAgent(
    name="safe-agent",
    model="gpt-4o",
    guardrails=[
        PIIGuard(action="redact", entity_types=[
            PIIEntityType.EMAIL,
            PIIEntityType.PHONE,
            PIIEntityType.SSN,
        ]),
        HallucinationGuard(threshold=0.3, action="warn"),
    ],
)

# PII is automatically redacted before reaching the LLM
# Hallucinations are flagged based on RAG context grounding
```

Powered by LiteLLM. Write your agent once, then swap models with a single parameter. No code changes required.
Plus Azure OpenAI, AWS Bedrock, Vertex AI, Together AI, Groq, Fireworks, and many more via LiteLLM.
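Routing in the LiteLLM style infers the provider from the model string: an explicit `provider/model` prefix wins, otherwise well-known naming conventions apply. The sketch below is a toy illustration of that idea, not LiteLLM's actual resolution logic.

```python
def infer_provider(model: str) -> str:
    # Explicit "provider/model" prefix wins, e.g. "groq/llama-3.1-8b-instant".
    if "/" in model:
        return model.split("/", 1)[0]
    # Otherwise fall back to well-known model-name conventions.
    if model.startswith("gpt-"):
        return "openai"
    if model.startswith("claude-"):
        return "anthropic"
    if model.startswith("gemini-"):
        return "google"
    return "unknown"

print(infer_provider("gpt-4o"))                      # openai
print(infer_provider("claude-3-5-sonnet-20241022"))  # anthropic
print(infer_provider("groq/llama-3.1-8b-instant"))   # groq
```

This is why swapping the `model` parameter is the only change needed to move an agent between providers.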
Use what you need. Every module is independently importable and composable.
Install the SDK and create your first agent in under a minute.
```shell
pip install sagewai
```