Directives

The Directive Engine preprocesses prompts before the LLM call, resolving @context, @memory, @agent, /tool, /mcp, and # meta-directives into enriched context. This lets small and local models leverage the full Sagewai infrastructure without native tool-calling support.

from sagewai import DirectiveEngine

engine = DirectiveEngine(
    context=my_context_engine,
    model="codellama:7b",
)
result = await engine.resolve("@context('ML basics') Help me learn")
# result.prompt contains enriched text with context injected

DirectiveEngine

Main orchestrator for prompt preprocessing. Supports two syntax modes:

  • Sigil mode (resolve): @context('q'), /tool.name('a'), #model:x
  • Template mode (resolve_template): {{ context.search('q') }}

Both modes resolve through the same pipeline: parse, resolve, format, compress, and assemble.
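The parse stage can be pictured with a toy sigil matcher. This is a hedged sketch, not the engine's actual grammar — the real parser also handles scoped arguments, meta-directives, and template mode:

```python
import re

# Illustrative sketch of the *parse* stage only: pull sigil directives
# out of a prompt and leave the clean text behind. The regex and dict
# shape are assumptions for illustration, not the library's internals.
SIGIL = re.compile(r"(?P<sigil>[@/#])(?P<name>[\w.:]+)(?:\('(?P<arg>[^']*)'\))?")

def parse_sigils(prompt: str):
    """Return (directives, clean_prompt)."""
    directives = [
        {"sigil": m["sigil"], "name": m["name"], "arg": m["arg"]}
        for m in SIGIL.finditer(prompt)
    ]
    clean = re.sub(r"\s{2,}", " ", SIGIL.sub("", prompt)).strip()
    return directives, clean

directives, clean = parse_sigils("@context('ML basics') Help me learn")
# directives -> [{'sigil': '@', 'name': 'context', 'arg': 'ML basics'}]
# clean      -> 'Help me learn'
```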

from sagewai import DirectiveEngine

engine = DirectiveEngine(
    context=context_engine,
    memory=memory_provider,
    tools={"search": search_tool},
    agents={"researcher": researcher_agent},
    model="gpt-4o",
    resolution_timeout=10.0,
)
result = await engine.resolve(
    "@context('machine learning') @memory('previous findings') Summarize"
)

Constructor

| Parameter | Type | Default | Description |
|---|---|---|---|
| `context` | `Any \| None` | `None` | Context Engine for `@context` directives |
| `memory` | `Any \| None` | `None` | Memory provider for `@memory` directives |
| `tools` | `dict[str, Any] \| None` | `None` | Tools for `/tool` directives |
| `agents` | `dict[str, Any] \| None` | `None` | Agents for `@agent:name()` directives |
| `mcp_clients` | `dict[str, Any] \| None` | `None` | MCP clients for `/mcp` directives |
| `model` | `str \| None` | `None` | Model name for profile auto-detection |
| `model_profile` | `ModelProfile \| None` | `None` | Explicit model profile (overrides auto-detection) |
| `max_context_tokens` | `int \| None` | `None` | Token budget override |
| `registry` | `DirectiveRegistry \| None` | `None` | Custom directive registry |
| `allowed_tools` | `set[str] \| None` | `None` | Tool allowlist (security) |
| `allowed_mcp` | `set[str] \| None` | `None` | MCP allowlist (security) |
| `allow_all_tools` | `bool` | `False` | Bypass tool allowlists |
| `resolution_timeout` | `float` | `10.0` | Max seconds per directive resolution |
| `max_agent_depth` | `int` | `3` | Max recursive agent delegation depth |

Methods

| Method | Signature | Returns | Description |
|---|---|---|---|
| `resolve` | `async resolve(prompt: str)` | `DirectiveResult` | Resolve sigil-syntax directives |
| `resolve_template` | `async resolve_template(prompt: str)` | `DirectiveResult` | Resolve template-syntax directives |

DirectiveResult

Result of directive resolution. Contains the enriched prompt and metadata.

result = await engine.resolve("@context('topic') My question")

print(result.prompt)           # Enriched prompt with context injected
print(result.clean_prompt)     # Original text with directives stripped
print(result.context_blocks)   # Resolved context blocks
print(result.metadata)         # Token counts, timings, stats
print(result.tool_descriptions)  # Tool descriptions for prompt-based calling

Fields

| Field | Type | Description |
|---|---|---|
| `prompt` | `str` | Final enriched prompt text with all context injected |
| `clean_prompt` | `str` | Original text with directives stripped |
| `context_blocks` | `list[ContextBlock]` | Resolved context blocks for system-message injection |
| `metadata` | `DirectiveMetadata` | Token counts, timings, resolution stats |
| `directives_found` | `list[ResolvedDirective]` | All parsed directives with results |
| `overrides` | `ExecutionOverrides \| None` | Execution overrides from `#` meta-directives |
| `tool_descriptions` | `str` | Formatted tool descriptions for prompt-based tool calling |
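To illustrate what `tool_descriptions` is for: models without native function calling are shown their tools as plain text, so they can emit a `/tool.name('args')` line instead of a structured call. The exact format below is an assumption, not the library's output:

```python
# Hypothetical formatter for prompt-based tool calling. The heading and
# bullet format are illustrative; the engine's real template may differ.
def format_tool_descriptions(tools: dict[str, str]) -> str:
    lines = ["You may call tools by writing /tool.<name>('<args>'):"]
    for name, desc in sorted(tools.items()):
        lines.append(f"- /tool.{name}('...'): {desc}")
    return "\n".join(lines)

print(format_tool_descriptions({"search": "Web search by query"}))
```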

ModelProfile

Defines how the Directive Engine formats output for a model class. Controls compression aggressiveness, delimiter style, tool-call mode, and token budget allocation.

from sagewai import ModelProfile

profile = ModelProfile(
    name="custom",
    max_context_tokens=8192,
    compression_ratio=2.0,
    tool_call_mode="prompt_based",
)

Fields

| Field | Type | Default | Description |
|---|---|---|---|
| `name` | `str` | required | Profile identifier |
| `max_context_tokens` | `int` | `4096` | Token budget for directive results |
| `compression_ratio` | `float` | `1.0` | Target compression (1.0 = none, 5.0 = aggressive) |
| `max_few_shot` | `int` | `3` | Max few-shot examples to inject |
| `use_delimiters` | `bool` | `False` | Wrap content in `[CONTEXT]`/`[SOURCE]` delimiters |
| `use_explicit_instructions` | `bool` | `False` | Add explicit framing for small models |
| `tool_call_mode` | `str` | `"native"` | `"native"` or `"prompt_based"` |
| `context_budget` | `dict[str, float]` | `{...}` | Token budget allocation per category |
| `default_top_k` | `int` | `5` | Default `top_k` for context retrieval |
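A `context_budget` mapping can be read as per-category fractions of `max_context_tokens`. The category names and fractions below are assumptions for illustration, not the library's defaults; only the arithmetic is the point:

```python
# Illustrative budget split: each category gets its fraction of the
# overall token budget, truncated to whole tokens.
def allocate_budget(max_context_tokens: int,
                    context_budget: dict[str, float]) -> dict[str, int]:
    return {category: int(max_context_tokens * fraction)
            for category, fraction in context_budget.items()}

allocate_budget(4096, {"context": 0.5, "memory": 0.3, "tools": 0.2})
# -> {'context': 2048, 'memory': 1228, 'tools': 819}
```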

Built-in Profiles

| Profile | Context Tokens | Compression | Tool Mode | Use Case |
|---|---|---|---|---|
| `SMALL` | 2048 | 5.0 | `prompt_based` | Local models under 13B params |
| `MEDIUM` | 8192 | 2.0 | `native` | Mid-range models (13B-70B, GPT-4o-mini) |
| `LARGE` | 32768 | 1.0 | `native` | Frontier models (GPT-4o, Claude, Gemini Pro) |
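Reading `compression_ratio` as a size divisor: a ratio of 5.0 (the SMALL profile) condenses retrieved text to roughly one fifth of its token count before injection, while 1.0 (LARGE) injects it uncompressed. The summarization itself is model-driven; this sketch only shows the budget arithmetic:

```python
# Target size implied by a compression ratio; the rounding policy here
# is an assumption, not the engine's documented behavior.
def target_tokens(raw_tokens: int, compression_ratio: float) -> int:
    return max(1, round(raw_tokens / compression_ratio))

target_tokens(1000, 5.0)  # SMALL  -> 200
target_tokens(1000, 2.0)  # MEDIUM -> 500
target_tokens(1000, 1.0)  # LARGE  -> 1000
```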

detect_profile

Auto-detect the model profile from a model name string. Matches against known patterns and falls back to MEDIUM for unknown models.

from sagewai import detect_profile

profile = detect_profile("codellama:7b-instruct")  # -> SMALL
profile = detect_profile("gpt-4o")                 # -> LARGE
profile = detect_profile("ollama/mistral")         # -> SMALL
profile = detect_profile("unknown-model")          # -> MEDIUM

Signature

detect_profile(model: str) -> ModelProfile
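The detection can be pictured as pattern matching over the model name with a MEDIUM fallback. The patterns below are assumptions chosen to reproduce the examples above (returning profile names rather than `ModelProfile` objects for brevity); they are not the library's actual matching rules:

```python
import re

# Illustrative name-based detection. Anchoring "gpt-4o" at the end keeps
# "gpt-4o-mini" in the MEDIUM tier, per the built-in profiles table.
_SMALL = re.compile(r"(7b|13b|codellama|ollama|mistral)", re.I)
_LARGE = re.compile(r"(gpt-4o$|claude|gemini)", re.I)

def detect_profile_sketch(model: str) -> str:
    if _SMALL.search(model):
        return "SMALL"
    if _LARGE.search(model):
        return "LARGE"
    return "MEDIUM"   # fallback for unknown models
```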

Directive Syntax Reference

| Directive | Syntax | Description |
|---|---|---|
| Context | `@context('query')` | Retrieve context by query |
| Context (scoped) | `@context('query', scope='org', tags='finance,q4')` | Scoped retrieval with tag filtering |
| Memory | `@memory('query')` | Search the memory store |
| Agent | `@agent:name('task')` | Delegate to another agent (colon syntax) |
| Workflow | `@wf:name('input')` | Invoke a saved workflow |
| Tool | `/tool.name('args')` | Invoke a tool |
| MCP | `/mcp.server.tool('args')` | Invoke an MCP tool |
| Model | `#model:name` | Override the model for this call |
| Budget | `#budget:amount` | Set a cost budget |
| Dynamic | `@datetime`, `@date`, `@time` | Current date/time values |
| Template | `{{ context.search('q') }}` | Template-syntax alternative |