
Mantrix Agents

Multi-agent AI framework with advanced context sharing, graph memory, and typed handoffs.



24+ Context Strategies

Context management spanning six strategy families: basic, memory, filtering, optimization, intelligence, and meta-strategies.

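To make the idea concrete, here is what one filtering strategy can look like: a sliding-window trimmer in plain Python. The `Message` and `SlidingWindowStrategy` names are illustrative stand-ins, not the framework's actual classes.

```python
from dataclasses import dataclass

@dataclass
class Message:
    role: str
    content: str

class SlidingWindowStrategy:
    """Keep the system prompt plus only the last `window` messages (sketch)."""

    def __init__(self, window: int = 4):
        self.window = window

    def apply(self, messages: list[Message]) -> list[Message]:
        # System messages are pinned; everything else is trimmed to the window.
        system = [m for m in messages if m.role == "system"]
        rest = [m for m in messages if m.role != "system"]
        return system + rest[-self.window:]

msgs = [Message("system", "be helpful")] + [Message("user", f"q{i}") for i in range(10)]
trimmed = SlidingWindowStrategy(window=3).apply(msgs)
```

Memory, optimization, and meta-strategies follow the same shape: a transform over the message list applied before each model call.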

18+ Handoff Strategies

Intelligent routing, workflow patterns, context management, and security strategies for safe agent-to-agent communication.

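As a sketch of what a typed handoff buys you: the receiving agent declares a payload type and rejects anything else before tokens are spent. The `RefundRequest` type and `handoff` function below are invented for illustration, not the framework's API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RefundRequest:
    # Hypothetical handoff payload type; the real handoff schema may differ.
    order_id: str
    amount: float
    reason: str

def handoff(payload: RefundRequest) -> str:
    # A typed handoff: malformed context is rejected at the boundary,
    # which is also where routing and security checks would run.
    if not isinstance(payload, RefundRequest):
        raise TypeError("refund agent only accepts RefundRequest payloads")
    return f"routing order {payload.order_id} to the refund agent"

result = handoff(RefundRequest(order_id="A-17", amount=12.5, reason="damaged"))
```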

Four-Network Memory

Cognitive-science-inspired graph memory with Facts, Experiences, Summaries, and Beliefs networks.

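A toy version of the idea, with each of the four networks as a small adjacency map (illustrative only; the real graph memory is far richer than string-keyed sets):

```python
from collections import defaultdict

class GraphMemory:
    """Toy four-network memory: one tiny graph per cognitive network."""

    NETWORKS = ("facts", "experiences", "summaries", "beliefs")

    def __init__(self):
        self.networks = {name: defaultdict(set) for name in self.NETWORKS}

    def link(self, network: str, src: str, dst: str) -> None:
        if network not in self.networks:
            raise KeyError(f"unknown network: {network}")
        self.networks[network][src].add(dst)

    def neighbors(self, network: str, node: str) -> set[str]:
        return set(self.networks[network][node])

mem = GraphMemory()
mem.link("facts", "Paris", "capital_of:France")       # stable knowledge
mem.link("beliefs", "user", "prefers:short_answers")  # revisable inference
```

Separating the networks lets facts stay stable while beliefs are revised and experiences are summarized over time.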

Safety & Compliance

Policy engine, supervisor process, and human escalation with built-in presets for Strict, GDPR, HIPAA, and Financial compliance.

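A minimal sketch of how a policy preset can work: a list of named rules that flag a message before it leaves the system. The regex rules below are made up for illustration, not the shipped GDPR preset.

```python
import re

# Hypothetical preset: (rule name, pattern) pairs that block a message on match.
GDPR_LIKE_RULES = [
    ("email", re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")),
    ("ssn", re.compile(r"\b\d{3}-\d{2}-\d{4}\b")),
]

def check(message: str, rules=GDPR_LIKE_RULES) -> list[str]:
    """Return the names of every rule the message violates (empty = allowed)."""
    return [name for name, pattern in rules if pattern.search(message)]

violations = check("contact me at jane@example.com")
```

In a full system a supervisor would act on the violations (redact, block, or escalate to a human) rather than just reporting them.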

Multi-SDK Support

Python-first with TypeScript bindings, Go FFI, and an optional high-performance Rust core providing 2-10x speedups.


Full Observability

OpenTelemetry tracing, step-by-step execution history, event bus, and real-time audit streaming via WebSocket.

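As a rough sketch of the event-bus side: a tiny synchronous pub/sub bus that also records a step-by-step history. The real bus is presumably async and richer; this only shows the pattern.

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Minimal synchronous pub/sub bus with an execution-history log."""

    def __init__(self):
        self._subscribers: dict[str, list[Callable]] = defaultdict(list)
        self.history: list[tuple[str, dict]] = []  # step-by-step record

    def subscribe(self, topic: str, handler: Callable) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, payload: dict) -> None:
        self.history.append((topic, payload))  # audit trail first
        for handler in self._subscribers[topic]:
            handler(payload)

bus = EventBus()
seen = []
bus.subscribe("agent.step", seen.append)
bus.publish("agent.step", {"step": 1, "tool": "search"})
```

OpenTelemetry tracing and WebSocket audit streaming would hang off the same publish path: every step event becomes a span or a streamed audit record.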

MCP Protocol

Client/server support for the Model Context Protocol. Connect to MCP tool servers or expose agents as MCP tools.

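MCP is JSON-RPC 2.0 under the hood. This is the approximate shape of a `tools/call` request per the public Model Context Protocol spec; the tool name and arguments here are invented.

```python
import json

# An MCP tool invocation is a JSON-RPC 2.0 request with method "tools/call".
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_weather",            # hypothetical tool exposed by a server
        "arguments": {"city": "Paris"},   # tool-specific arguments
    },
}
wire = json.dumps(request)   # what actually goes over the transport
decoded = json.loads(wire)
```

Acting as an MCP client means sending messages like this to a tool server; exposing agents as MCP tools means answering them.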

A2A Protocol

Agent-to-Agent protocol for cross-system agent communication via AgentCards and task exchange.

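An AgentCard is a small JSON document that advertises what an agent can do. The field names below follow the public A2A spec; the values are illustrative.

```python
# Rough shape of an A2A AgentCard (values invented for illustration).
agent_card = {
    "name": "Assistant",
    "description": "Answers general questions",
    "url": "http://localhost:8000/a2a",   # where the agent accepts A2A tasks
    "version": "0.1.8",
    "capabilities": {"streaming": True},
    "skills": [
        {
            "id": "qa",
            "name": "Question answering",
            "description": "Answer factual questions",
        }
    ],
}
```

A remote system fetches the card, picks a skill, and exchanges tasks with the agent at the advertised URL.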

Reinforcement Learning

3-tier RL system — tabular Q-learning, trajectory/experience replay, optional neural policies (DQN, A2C).

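Tier 1 is classic tabular Q-learning; the update rule it rests on looks like this (the standard algorithm, not the framework's exact code):

```python
from collections import defaultdict

def q_update(q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """One tabular Q-learning step:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(q[next_state].values(), default=0.0)
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])

# Q-table: state -> action -> value, all zeros to start.
q = defaultdict(lambda: defaultdict(float))
q_update(q, "start", "search", reward=1.0, next_state="done")
```

The higher tiers replace the table with replayed trajectories and, optionally, neural approximators (DQN, A2C) over the same update idea.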

Workflow Primitives

Composable agent patterns — LoopAgent, ConditionalAgent, PipelineAgent with AgentLike protocol.

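The composition idea can be sketched with plain callables standing in for AgentLike (the real protocol is richer than `Callable[[str], str]`):

```python
from typing import Callable

# Stand-in for the AgentLike protocol: anything that maps text to text.
AgentLike = Callable[[str], str]

def pipeline(*stages: AgentLike) -> AgentLike:
    """PipelineAgent-style: run stages in order, feeding each output forward."""
    def run(text: str) -> str:
        for stage in stages:
            text = stage(text)
        return text
    return run

def loop(stage: AgentLike, until: Callable[[str], bool], max_iters: int = 10) -> AgentLike:
    """LoopAgent-style: repeat a stage until the condition holds."""
    def run(text: str) -> str:
        for _ in range(max_iters):
            if until(text):
                break
            text = stage(text)
        return text
    return run

shout = pipeline(str.strip, str.upper)
out = shout("  hello  ")

grow = loop(lambda s: s + "!", until=lambda s: s.endswith("!!!"))
cheer = grow("hi")
```

Because every combinator returns another AgentLike, loops, conditionals, and pipelines nest freely.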

Evals Framework

Built-in evaluation suite with metrics, concurrent runner, and fine-tuning export (DPO/SFT/RLHF).

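A minimal picture of what an eval run does: execute cases concurrently and score them with a metric, exact-match accuracy here. `fake_agent` is a stand-in for a real agent, and the helper names are invented.

```python
import asyncio

async def fake_agent(question: str) -> str:
    # Stand-in for a real agent call; answers from a fixed lookup.
    return {"2+2?": "4", "capital of France?": "Paris"}.get(question, "unknown")

async def evaluate(cases: list[tuple[str, str]]) -> float:
    """Run all cases concurrently, then score exact-match accuracy."""
    answers = await asyncio.gather(*(fake_agent(q) for q, _ in cases))
    hits = sum(a == expected for a, (_, expected) in zip(answers, cases))
    return hits / len(cases)

accuracy = asyncio.run(evaluate([
    ("2+2?", "4"),
    ("capital of France?", "Paris"),
    ("x?", "y"),   # a deliberate miss
]))
```

Fine-tuning export then reuses the same case/answer records, reshaping them into DPO, SFT, or RLHF training formats.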

Durable Execution

Temporal integration for crash-resilient agent workflows with automatic checkpointing.

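Temporal gets durability from event-history replay; the toy checkpoint below only shows the resume-where-you-left-off idea, not Temporal's actual mechanism.

```python
import json
import tempfile
from pathlib import Path

def run_workflow(steps, ckpt: Path):
    """Run steps in order, persisting progress so a crashed run can resume."""
    done = json.loads(ckpt.read_text())["done"] if ckpt.exists() else 0
    results = []
    for i, step in enumerate(steps):
        if i < done:
            continue  # already completed before the crash; skip on replay
        results.append(step())
        ckpt.write_text(json.dumps({"done": i + 1}))  # checkpoint after each step
    return results

ckpt = Path(tempfile.mkdtemp()) / "ckpt.json"
first = run_workflow([lambda: "a", lambda: "b"], ckpt)          # completes both steps
resumed = run_workflow([lambda: "a", lambda: "b", lambda: "c"], ckpt)  # only runs the new step
```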


Quick Example

import asyncio
from orynx import Agent, AgentIdentity, Context
from orynx.providers import LLMAdapter

async def main():
    agent = Agent(
        identity=AgentIdentity(
            name="Assistant",
            role="helpful assistant",
            goal="Help users with their questions",
        ),
        provider=LLMAdapter(
            model="claude-sonnet-4-20250514",
            provider="anthropic",
        ),
    )

    result = await agent.run(
        Context.create(),
        "What is the capital of France?"
    )
    print(result.final_message.content)

asyncio.run(main())

Or even simpler with the quick API:

import asyncio
import orynx

result = asyncio.run(orynx.quick("What is the capital of France?", model="claude-sonnet-4-20250514"))
print(result.final_message.content)

TypeScript:

import { Agent, AnthropicProvider, createContext, userMessage } from '@orynx/agents';

const provider = new AnthropicProvider({
  apiKey: process.env.ANTHROPIC_API_KEY!,
});

const agent = new Agent({
  name: 'Assistant',
  role: 'helpful assistant',
  goal: 'Help users with questions',
  provider,
});

const result = await agent.run(
  createContext([userMessage('What is the capital of France?')])
);
console.log(result.final_message?.content);

HTTP API:

# Create an agent
curl -X POST http://localhost:8000/api/agents \
  -H "Content-Type: application/json" \
  -d '{
    "identity": {
      "name": "Assistant",
      "role": "helpful assistant",
      "goal": "Help users with their questions"
    },
    "config": {
      "model": "claude-sonnet-4-20250514",
      "provider": "anthropic"
    }
  }'

# Run the agent
curl -X POST http://localhost:8000/api/agents/{agent_id}/run \
  -H "Content-Type: application/json" \
  -d '{
    "context": {"messages": []},
    "user_message": "What is the capital of France?"
  }'

CLI:

# Interactive chat
orynx chat --provider anthropic --model claude-sonnet-4-20250514

# Single query
orynx run "What is the capital of France?" --provider anthropic

Current Status

Version: 0.1.8 (Alpha)

Component Status Notes
Python Core ✅ Implemented Agent, primitives, events, config
Context Strategies (24+) ✅ Implemented All strategies functional
Handoff Protocol (18+) ✅ Implemented Routing, workflow, context, security
Graph Memory ✅ Implemented Four-network cognitive model
Safety Layer ✅ Implemented Policy engine, supervisor, escalation, presets
Audit System ✅ Implemented Logging, tracking, compliance
Orchestration ✅ Implemented Sequential, parallel, hierarchical
LLM Providers ✅ Implemented Anthropic, OpenAI, HuggingFace, LiteLLM
HTTP Server ✅ Implemented FastAPI with SSE streaming
TypeScript SDK ✅ Implemented Client, builders, providers
Rust Core ✅ Implemented Context ops, graph, embeddings
MCP Protocol ✅ Implemented Client/server MCP support
A2A Protocol ✅ Implemented AgentCards, task exchange
Workflow Primitives ✅ Implemented Loop, conditional, pipeline agents
Evals Framework ✅ Implemented Metrics, runner, fine-tuning export
Temporal Integration ✅ Implemented Durable execution, checkpointing
Reinforcement Learning ✅ Implemented Q-learning, replay, DQN/A2C
Tests ✅ 680 passing Full test coverage