Mantrix Agents¶
Multi-agent AI framework with advanced context sharing, graph memory, and typed handoffs.
24+ Context Strategies¶
Comprehensive context management spanning basic, memory, filtering, optimization, intelligence, and meta-strategies.
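To make the idea concrete, here is a minimal sketch of one kind of filtering strategy, a sliding window that trims old messages while preserving the system prompt. The `Message` class and `sliding_window` function are illustrative stand-ins, not the framework's actual API.

```python
from dataclasses import dataclass

@dataclass
class Message:
    role: str
    content: str

def sliding_window(messages, max_messages=4):
    # Always keep the system prompt, then only the newest messages.
    # (Illustrative strategy; real strategies also handle token budgets, etc.)
    system = [m for m in messages if m.role == "system"]
    rest = [m for m in messages if m.role != "system"]
    return system + rest[-max_messages:]

history = [Message("system", "Be helpful.")] + [
    Message("user", f"question {i}") for i in range(10)
]
trimmed = sliding_window(history)
print(len(trimmed))  # 5: the system prompt plus the last 4 user messages
```

A real strategy would typically operate on the framework's `Context` object rather than a plain list.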
18+ Handoff Strategies¶
Intelligent routing, workflow patterns, context management, and security strategies for safe agent-to-agent communication.
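As a rough illustration of what a typed handoff carries, the sketch below routes a task and builds a structured payload for the receiving agent. All class and field names here are hypothetical, chosen for illustration rather than taken from the framework.

```python
from dataclasses import dataclass, field

@dataclass
class Handoff:
    # Hypothetical typed-handoff payload, not the framework's definition.
    source_agent: str                    # agent initiating the transfer
    target_agent: str                    # agent receiving control
    reason: str                          # why the router chose this target
    context_summary: str                 # compressed context passed along
    metadata: dict = field(default_factory=dict)

def route(task: str) -> Handoff:
    # Toy keyword router: pick a target agent based on the task text.
    target = "researcher" if "research" in task else "assistant"
    return Handoff(
        source_agent="triage",
        target_agent=target,
        reason=f"keyword match for task: {task!r}",
        context_summary=task,
    )

handoff = route("research the topic")
print(handoff.target_agent)  # researcher
```

Typing the payload lets the receiving agent validate what it was given instead of parsing free-form text.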
Four-Network Memory¶
Cognitive-science-inspired graph memory with Facts, Experiences, Summaries, and Beliefs networks.
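A toy model of the four-network split might look like the following: entries live in one of four networks, and a simple graph of related subjects lets recall pull in neighbouring entries. The class and method names are illustrative, not the framework's API.

```python
from collections import defaultdict

class FourNetworkMemory:
    # Minimal sketch of a four-network store, loosely following the
    # Facts / Experiences / Summaries / Beliefs split described above.
    NETWORKS = ("facts", "experiences", "summaries", "beliefs")

    def __init__(self):
        # One list of (subject, content) entries per network.
        self.networks = {name: [] for name in self.NETWORKS}
        # Toy graph edges: subject -> set of related subjects.
        self.edges = defaultdict(set)

    def add(self, network, subject, content, related=()):
        self.networks[network].append((subject, content))
        for other in related:
            self.edges[subject].add(other)
            self.edges[other].add(subject)

    def recall(self, subject):
        # Gather entries about the subject and its graph neighbours.
        wanted = {subject} | self.edges[subject]
        return {
            name: [c for s, c in entries if s in wanted]
            for name, entries in self.networks.items()
        }

mem = FourNetworkMemory()
mem.add("facts", "paris", "Paris is the capital of France.")
mem.add("experiences", "user", "User asked about France.", related=("paris",))
print(mem.recall("paris"))
```

The real graph memory adds embeddings, decay, and cross-network consolidation; this sketch only shows why separating the networks is useful for targeted recall.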
Safety & Compliance¶
Policy engine, supervisor process, and human escalation with built-in presets for Strict, GDPR, HIPAA, and Financial compliance.
Multi-SDK Support¶
Python-first with TypeScript bindings, Go FFI, and an optional high-performance Rust core providing 2-10x speedups.
Full Observability¶
OpenTelemetry tracing, step-by-step execution history, event bus, and real-time audit streaming via WebSocket.
MCP Protocol¶
Client/server support for the Model Context Protocol. Connect to MCP tool servers or expose agents as MCP tools.
A2A Protocol¶
Agent-to-Agent protocol for cross-system agent communication via AgentCards and task exchange.
Reinforcement Learning¶
3-tier RL system — tabular Q-learning, trajectory/experience replay, optional neural policies (DQN, A2C).
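The first tier, tabular Q-learning, can be sketched in a few lines. The toy environment and hyperparameters below are illustrative, not the framework's defaults: an agent on states 0..2 learns that walking right reaches the reward.

```python
import random
from collections import defaultdict

random.seed(0)
q = defaultdict(float)                 # (state, action) -> estimated value
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration
actions = ("left", "right")
GOAL = 2

def step(state, action):
    # One-dimensional walk on states 0..2; reaching GOAL pays reward 1.
    nxt = min(GOAL, max(0, state + (1 if action == "right" else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

for _ in range(500):
    state, done = 0, False
    while not done:
        if random.random() < epsilon:                      # explore
            action = random.choice(actions)
        else:                                              # exploit
            action = max(actions, key=lambda a: q[(state, a)])
        nxt, reward, done = step(state, action)
        best_next = max(q[(nxt, a)] for a in actions)
        # Standard Q-learning temporal-difference update.
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = nxt

print(max(actions, key=lambda a: q[(0, a)]))  # the learned greedy action at state 0
```

The second and third tiers build on the same value-learning idea with experience replay and neural function approximation in place of the table.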
Workflow Primitives¶
Composable agent patterns — LoopAgent, ConditionalAgent, PipelineAgent with AgentLike protocol.
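The composition idea can be sketched with plain Python: anything with a `run` method satisfies an `AgentLike` protocol, so pipelines and conditionals nest freely. This protocol definition is a stand-in for illustration, not the framework's actual one.

```python
from typing import Callable, Protocol

class AgentLike(Protocol):
    # Stand-in structural protocol: anything with run(text) -> text qualifies.
    def run(self, text: str) -> str: ...

class FnAgent:
    # Wraps a plain function as an agent, for demonstration.
    def __init__(self, fn: Callable[[str], str]):
        self.fn = fn
    def run(self, text: str) -> str:
        return self.fn(text)

class PipelineAgent:
    def __init__(self, *stages: AgentLike):
        self.stages = stages
    def run(self, text: str) -> str:
        for stage in self.stages:          # feed each stage's output forward
            text = stage.run(text)
        return text

class ConditionalAgent:
    def __init__(self, predicate, if_true: AgentLike, if_false: AgentLike):
        self.predicate, self.if_true, self.if_false = predicate, if_true, if_false
    def run(self, text: str) -> str:
        branch = self.if_true if self.predicate(text) else self.if_false
        return branch.run(text)

pipeline = PipelineAgent(
    FnAgent(str.strip),
    ConditionalAgent(str.islower, FnAgent(str.upper), FnAgent(str.lower)),
)
print(pipeline.run("  hello  "))  # HELLO
```

Because every primitive is itself `AgentLike`, a `ConditionalAgent` can sit inside a `PipelineAgent`, a `LoopAgent` can wrap a pipeline, and so on.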
Evals Framework¶
Built-in evaluation suite with metrics, concurrent runner, and fine-tuning export (DPO/SFT/RLHF).
Durable Execution¶
Temporal integration for crash-resilient agent workflows with automatic checkpointing.
Quick Example¶
```python
import asyncio

from orynx import Agent, AgentIdentity, Context
from orynx.providers import LLMAdapter


async def main():
    agent = Agent(
        identity=AgentIdentity(
            name="Assistant",
            role="helpful assistant",
            goal="Help users with their questions",
        ),
        provider=LLMAdapter(
            model="claude-sonnet-4-20250514",
            provider="anthropic",
        ),
    )

    result = await agent.run(
        Context.create(),
        "What is the capital of France?",
    )
    print(result.final_message.content)


asyncio.run(main())
```
The equivalent using the TypeScript SDK:
```typescript
import { Agent, AnthropicProvider, createContext, userMessage } from '@orynx/agents';

const provider = new AnthropicProvider({
  apiKey: process.env.ANTHROPIC_API_KEY!,
});

const agent = new Agent({
  name: 'Assistant',
  role: 'helpful assistant',
  goal: 'Help users with questions',
  provider,
});

const result = await agent.run(
  createContext([userMessage('What is the capital of France?')])
);
console.log(result.final_message?.content);
```
Or call the HTTP server's REST API directly:

```bash
# Create an agent
curl -X POST http://localhost:8000/api/agents \
  -H "Content-Type: application/json" \
  -d '{
    "identity": {
      "name": "Assistant",
      "role": "helpful assistant",
      "goal": "Help users with their questions"
    },
    "config": {
      "model": "claude-sonnet-4-20250514",
      "provider": "anthropic"
    }
  }'

# Run the agent
curl -X POST http://localhost:8000/api/agents/{agent_id}/run \
  -H "Content-Type: application/json" \
  -d '{
    "context": {"messages": []},
    "user_message": "What is the capital of France?"
  }'
```
Current Status¶
Version: 0.1.8 (Alpha)
| Component | Notes |
|---|---|
| Python Core | Agent, primitives, events, config |
| Context Strategies (24+) | All strategies functional |
| Handoff Protocol (18+) | Routing, workflow, context, security |
| Graph Memory | Four-network cognitive model |
| Safety Layer | Policy engine, supervisor, escalation, presets |
| Audit System | Logging, tracking, compliance |
| Orchestration | Sequential, parallel, hierarchical |
| LLM Providers | Anthropic, OpenAI, HuggingFace, LiteLLM |
| HTTP Server | FastAPI with SSE streaming |
| TypeScript SDK | Client, builders, providers |
| Rust Core | Context ops, graph, embeddings |
| MCP Protocol | Client/server MCP support |
| A2A Protocol | AgentCards, task exchange |
| Workflow Primitives | Loop, conditional, pipeline agents |
| Evals Framework | Metrics, runner, fine-tuning export |
| Temporal Integration | Durable execution, checkpointing |
| Reinforcement Learning | Q-learning, replay, DQN/A2C |
| Tests | Full test coverage |