Autopilot
Sagewai Autopilot lets you state a goal in plain English and have the platform design, provision, run, and continuously improve the agents that deliver it. Instead of writing agent code, you describe the outcome you want and Autopilot handles the rest — routing your goal to the best matching blueprint, extracting slot values, scheduling a mission, and driving execution through the agent graph.
Autopilot is built for production operations teams who want durable, observable, self-improving agent workflows without the overhead of managing prompt engineering or model selection. The hosted blueprint service learns from every mission run, so routing accuracy improves over time and you can export training data for private fine-tuning at any point.
Architecture
User goal (plain text)
│
▼
┌─────────────┐ retrieve_blueprints() ┌──────────────────────┐
│ GoalRouter │ ────────────────────────► │ SagewaiLLMClient │
│ │ ◄──── ranked candidates ─ │ (hosted service) │
└──────┬──────┘ └──────────────────────┘
│ ConfidenceConfig gates the result
│
┌─────┴──────────────┐
│ │ │
AutoRouted PickerNeeded SynthesisNeeded
│ │ │
│ User picks one Service generates new blueprint
│ │
▼ ▼
AutopilotController (approve → schedule)
│
▼
MissionDriver → AgentGraph → AgentExecutor (LiteLLM)
│
▼
Curator → TrainingDataset → FineTuneJob
Quick start
1. Enable Autopilot
import httpx
httpx.post("http://localhost:8765/api/v1/autopilot/enable", json={"tier": "anonymous"})
Or via CLI:
sagewai autopilot enable --tier anonymous
2. Submit a goal
resp = httpx.post(
"http://localhost:8765/api/v1/autopilot/goal",
json={"goal": "run daily competitive research on 3 vendors"},
headers={"X-Project-ID": "my-project"},
)
data = resp.json()
print(data["kind"]) # "auto_routed" | "picker_needed" | "synthesis_needed"
print(data["preview"])
Or via CLI:
sagewai autopilot goal "run daily competitive research on 3 vendors" \
--project my-project
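A goal submission can return any of the three kinds, so client code should branch on `kind`. A minimal sketch of that branching as a pure function over the documented response payloads (the function name and return strings are ours, for illustration only):

```python
def handle_goal_response(data: dict) -> str:
    """Branch on the routing outcome of POST /api/v1/autopilot/goal."""
    kind = data["kind"]
    if kind == "auto_routed":
        # High-confidence match: a draft mission already exists and can be approved.
        return f"approve mission {data['mission_id']}"
    if kind == "picker_needed":
        # Several plausible blueprints: show the ranked candidates to the user.
        best = max(data["candidates"], key=lambda c: c["score"])
        return f"ask user; best score {best['score']}"
    # synthesis_needed: the service will generate a new blueprint for this goal.
    return f"synthesize blueprint for goal: {data['goal']}"
```

Feed it `resp.json()` from the request above.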
3. Approve and monitor
Once you receive an auto_routed result, approve the mission using the mission_id from the response:
httpx.post(
f"http://localhost:8765/api/v1/autopilot/missions/{mission_id}/approve",
headers={"X-Project-ID": "my-project"},
)
Then monitor progress:
sagewai autopilot missions --project my-project
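For unattended runs you may want to block until a mission finishes. A polling sketch over the missions list, with the data fetch injected as a callable so it works with any HTTP client; the terminal status names are assumptions based on the lifecycle described in this document:

```python
import time
from typing import Callable

TERMINAL = {"completed", "failed", "cancelled"}  # assumed terminal status names

def is_terminal(status: str) -> bool:
    """True once a mission can no longer make progress."""
    return status.lower() in TERMINAL

def wait_for_mission(
    mission_id: str,
    fetch_missions: Callable[[], list],
    timeout_s: float = 300.0,
    poll_s: float = 5.0,
) -> str:
    """Poll the missions list until the given mission reaches a terminal status."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        by_id = {m["mission_id"]: m for m in fetch_missions()}
        status = by_id[mission_id]["status"]
        if is_terminal(status):
            return status
        time.sleep(poll_s)
    raise TimeoutError(f"mission {mission_id} still running after {timeout_s}s")
```

Pass a `fetch_missions` callable that GETs `/api/v1/autopilot/missions` with your `X-Project-ID` header, e.g. via httpx as in the examples above.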
API reference
All routes are under /api/v1/autopilot and require the sagewai_auth cookie (or a Bearer token). Use the X-Project-ID header for project-scoped isolation.
GET /api/v1/autopilot/status
Return the current autopilot configuration.
Response
{
"enabled": true,
"tier": "anonymous",
"instance_id": "a3f1..."
}
POST /api/v1/autopilot/enable
Enable autopilot for the current instance.
Request body
{ "tier": "anonymous" }
tier values: anonymous, free, skip (skip = bypass service, use synthesis only).
Response: 200 OK with {"ok": true}.
POST /api/v1/autopilot/disable
Disable autopilot. Running missions are not affected.
Response: 200 OK with {"ok": true}.
POST /api/v1/autopilot/goal
Route a plain-English goal to a blueprint.
Request body
{ "goal": "run daily competitive research on 3 vendors" }
Response — auto_routed
{
"kind": "auto_routed",
"mission_id": "ms-abc123",
"blueprint_id": "SYNTHETIC_scheduled_research",
"preview": "Schedule: 0 9 * * 1-5\nVendors: 3 URLs\n...",
"slots": { "vendors": [], "schedule": "0 9 * * 1-5" }
}
Response — picker_needed
{
"kind": "picker_needed",
"candidates": [
{ "blueprint_json": "{...}", "score": 0.72 },
{ "blueprint_json": "{...}", "score": 0.68 }
]
}
Response — synthesis_needed
{
"kind": "synthesis_needed",
"goal": "run daily competitive research on 3 vendors"
}
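Note that in a picker_needed response each candidate's blueprint arrives as a JSON *string* under blueprint_json. A small helper (the helper name is ours, not part of the API) to decode and rank the candidates:

```python
import json

def parse_candidates(payload: dict) -> list:
    """Decode blueprint_json strings from a picker_needed response, best score first."""
    decoded = [
        (c["score"], json.loads(c["blueprint_json"]))
        for c in payload["candidates"]
    ]
    return sorted(decoded, key=lambda pair: pair[0], reverse=True)
```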
GET /api/v1/autopilot/missions
List all missions for the current project.
Response
[
{
"mission_id": "ms-abc123",
"blueprint_id": "scheduled_research",
"status": "scheduled",
"project_id": "my-project",
"created_at": "2026-04-15T09:00:00Z"
}
]
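The list payload is easy to summarize client-side, e.g. for a dashboard. A sketch that counts missions per status (helper name is ours):

```python
from collections import Counter

def mission_summary(missions: list) -> dict:
    """Count missions per status from the GET /missions payload."""
    return dict(Counter(m["status"] for m in missions))
```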
POST /api/v1/autopilot/missions/{mission_id}/approve
Approve a draft mission and advance it to SCHEDULED.
Response: 200 OK with the updated mission object.
DELETE /api/v1/autopilot/missions/{mission_id}
Cancel a mission. Has no effect if the mission is already COMPLETED or FAILED.
Response: 200 OK with {"cancelled": true}.
CLI commands
# Show autopilot status
sagewai autopilot status [--host localhost] [--port 8765] [--token TOKEN]
# Enable autopilot
sagewai autopilot enable [--tier anonymous] [--host localhost] [--port 8765]
# Disable autopilot
sagewai autopilot disable [--host localhost] [--port 8765]
# Route a goal and see the result
sagewai autopilot goal "your goal text" [--project PROJECT_ID]
# List active missions
sagewai autopilot missions [--project PROJECT_ID]
Configuration
Autopilot behaviour can be tuned via environment variables without changing code.
| Variable | Default | Description |
|---|---|---|
| AUTOPILOT_AUTO_ROUTE_THRESHOLD | 0.85 | Minimum score for automatic blueprint selection. |
| AUTOPILOT_PICKER_THRESHOLD | 0.65 | Minimum score to show the user a picker. |
| AUTOPILOT_CACHE_TTL | 3600 | Blueprint cache TTL in seconds. |
| SAGEWAI_LLM_BASE_URL | https://api.sagewai.ai | Base URL for the hosted blueprint service. |
Example — lower the auto-route threshold in a test environment:
export AUTOPILOT_AUTO_ROUTE_THRESHOLD=0.70
export AUTOPILOT_PICKER_THRESHOLD=0.50
sagewai autopilot goal "..."
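The two thresholds partition candidate scores into the three routing outcomes. A hypothetical sketch of that gating logic (the real ConfidenceConfig implementation may differ; the environment variable names match the table above):

```python
import os

def gate(best_score: float,
         auto_route: float = 0.85,
         picker: float = 0.65) -> str:
    """Map the top candidate score to a routing outcome."""
    if best_score >= auto_route:
        return "auto_routed"
    if best_score >= picker:
        return "picker_needed"
    return "synthesis_needed"

# Read overrides the same way the service does.
AUTO_ROUTE = float(os.environ.get("AUTOPILOT_AUTO_ROUTE_THRESHOLD", "0.85"))
PICKER = float(os.environ.get("AUTOPILOT_PICKER_THRESHOLD", "0.65"))
```

Lowering both thresholds, as in the example above, makes the router more eager to auto-route and less likely to fall back to synthesis.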
Routing autopilot through the LLM Harness
By default, AgentExecutor calls litellm.acompletion directly. This is convenient for development, but it bypasses the LLM Harness's budget enforcement, classification, routing, policy, audit, and cost tracking.
To route autopilot through the harness, construct a HarnessProxy and pass it to ExecutorConfig:
from sagewai.autopilot.controller.executor import ExecutorConfig
from sagewai.autopilot.controller.driver import MissionDriver
from sagewai.harness.proxy import HarnessProxy
from sagewai.harness.models import HarnessIdentity, HarnessConfig
from sagewai.harness.router import HarnessRouter
from sagewai.harness.store import InMemoryHarnessStore
from sagewai.harness.backend import AnthropicBackend
store = InMemoryHarnessStore()
router = HarnessRouter(...) # configure with your tier/policy/budget setup
proxy = HarnessProxy(
store=store,
router=router,
backends={"anthropic": AnthropicBackend(api_key="...")},
config=HarnessConfig(),
)
identity = HarnessIdentity(key_id="autopilot-default", user_id="autopilot")
executor_cfg = ExecutorConfig(
harness_proxy=proxy,
harness_identity=identity,
)
driver = MissionDriver(executor_config=executor_cfg)
result = await driver.execute(mission)
When the harness path is taken, every StepResult carries:
- output — full LLM response content (not the 200-char preview)
- messages — the full system + user + assistant conversation tuple
- telemetry — a StepTelemetry with cost_usd, input_tokens, output_tokens, model_used, latency_ms
Curator uses step.output directly when available, producing real training samples instead of preview-derived ones. The telemetry block is what the v1.1 cost-down second-run claim measures against.
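Per-step telemetry also makes mission-level cost reporting straightforward. A sketch that sums cost across steps (dicts stand in for the StepResult/StepTelemetry objects here, purely for illustration):

```python
def total_cost_usd(step_results: list) -> float:
    """Aggregate cost_usd across the telemetry blocks of a mission's steps.

    Steps that ran without the harness path carry no telemetry and are skipped.
    """
    return sum(
        s["telemetry"]["cost_usd"]
        for s in step_results
        if s.get("telemetry") is not None
    )
```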
Need more?
Contact licensing@sagewai.ai for custom rates.