Ruakiel is a multi-tenant AI persona platform built around a single constraint: security cannot be bolted on. It combines zero-knowledge encrypted conversation history, RBAC-enforced tool access, and a planning layer that ensures raw user input never reaches a tool-enabled execution agent.
When you give an AI agent access to tools, you create a new attack surface. Prompt injection, privilege escalation, data exfiltration — the risks are real and the existing frameworks weren't designed with security in mind. They were designed to ship fast.
Ruakiel was designed from first principles around one question: what would it take for an enterprise to trust an AI agent with real access to real systems?
Raw user input enters the planning layer. It is transformed, sanitized, and converted into structured objectives. No raw input ever reaches the execution agent — this architectural separation is the first line of defense against prompt injection.
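A minimal sketch of that barrier, with hypothetical names (the real planner is not public): the planner consumes the raw string and emits only typed objectives drawn from a closed action vocabulary, and the executor is typed so it can never receive raw text.

```python
from dataclasses import dataclass

# Closed vocabulary: anything outside it is dropped at planning time.
ALLOWED_ACTIONS = {"search", "summarize"}

@dataclass(frozen=True)
class Objective:
    # Typed, bounded fields -- the only data that crosses the barrier.
    action: str
    target: str

def plan(raw_input: str) -> list[Objective]:
    """Toy planner: the raw string is consumed here and never forwarded."""
    objectives = []
    for token in raw_input.lower().split():
        if token in ALLOWED_ACTIONS:
            objectives.append(Objective(action=token, target="user_request"))
    return objectives

def execute(objectives: list[Objective]) -> list[str]:
    # The executor's signature accepts Objectives only, never raw input.
    return [f"{o.action}:{o.target}" for o in objectives]
```

Because `execute` is typed against `Objective`, an injected instruction hidden in the raw prompt has no path to the tool layer: it either maps to a whitelisted action or it is discarded.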
Every tool call is checked against the caller's role before execution. Permissions are defined at the tenant level and enforced at the control plane — not in application code. No role, no execution. No exceptions.
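The enforcement pattern is deny-by-default: a grant table defined per tenant, checked before any invocation. A sketch with illustrative role and tool names (not the platform's actual API):

```python
# Hypothetical tenant-level grants: (role, tool) pairs. Anything absent
# from this set is denied -- there is no "allow by default" path.
GRANTS = {
    ("analyst", "web_search"),
    ("admin", "web_search"),
    ("admin", "file_write"),
}

class AuthorizationError(Exception):
    pass

def authorize(role: str, tool: str) -> None:
    """Control-plane check run before a tool call is ever attempted."""
    if (role, tool) not in GRANTS:
        raise AuthorizationError(f"role {role!r} may not call {tool!r}")

def call_tool(role: str, tool: str, args: dict) -> str:
    authorize(role, tool)       # no role, no execution
    return f"executed {tool}"   # placeholder for the real invocation
```

The key property: application code never decides access. `call_tool` cannot reach the tool without passing `authorize`, so a compromised agent prompt cannot widen its own permissions.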
After authorization, the agent executes its plan. Conversation outputs and persona memories are written with AES-256-GCM zero-knowledge encryption — keys derived from the session, never persisted. Every action is written to an immutable audit trail.
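The key-derivation half of that design can be sketched with an HKDF-SHA256 construction from the Python standard library. This is an illustration of session-derived keys in general, not Ruakiel's actual derivation; the secret, the session ID, and the info label are all assumed names.

```python
import hashlib
import hmac

def derive_session_key(session_secret: bytes, session_id: bytes) -> bytes:
    """Derive a per-session 256-bit key via HKDF-SHA256 (RFC 5869).

    In the design described above, the derived key feeds AES-256-GCM
    and is zeroized after use; neither the key nor plaintext is persisted.
    """
    # HKDF-Extract: salt (session_id) keys an HMAC over the input secret.
    prk = hmac.new(session_id, session_secret, hashlib.sha256).digest()
    # HKDF-Expand: one block suffices for a 32-byte (AES-256) key.
    return hmac.new(prk, b"conversation-at-rest" + b"\x01", hashlib.sha256).digest()
```

Because the key is a pure function of session material, the platform can re-derive it while the session lives and hold nothing afterward: losing the session secret means the ciphertext at rest is unreadable by anyone, operator included.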
Security isn't a feature you toggle on. It's the architecture. Every component of Ruakiel is designed around the assumption that agents will be attacked.
The planning layer transforms user input into structured objectives before any execution agent sees it. There is no direct path for raw user input to reach a tool-enabled agent. This isn't a runtime filter — it's separation of concerns enforced at the infrastructure level.
Permissions are not application-level suggestions. They are enforced at the control plane before a tool call is ever attempted. Define roles, assign tools, and trust that unauthorized calls simply won't happen.
Conversation history and persona memories are encrypted at rest with AES-256-GCM. Encryption keys are derived from your session and zeroized after use — the platform never persists plaintext user data.
Every plan, authorization check, and tool execution is logged immutably. Know exactly what your agents did, when, and why — queryable, exportable, and yours.
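One common way to make such a log tamper-evident is a hash chain, where each entry commits to its predecessor; this is a general-purpose sketch of that technique, not a claim about Ruakiel's internal storage format.

```python
import hashlib
import json

class AuditLog:
    """Append-only log; each entry hashes over its predecessor's hash,
    so editing any past entry breaks verification from that point on."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value

    def append(self, event: dict) -> str:
        record = {"event": event, "prev": self._last_hash}
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append({**record, "hash": digest})
        self._last_hash = digest
        return digest

    def verify(self) -> bool:
        prev = "0" * 64
        for e in self.entries:
            record = {"event": e["event"], "prev": prev}
            expected = hashlib.sha256(
                json.dumps(record, sort_keys=True).encode()
            ).hexdigest()
            if expected != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

Replaying `verify` over an exported log lets an auditor confirm, offline, that no plan, authorization decision, or tool call was rewritten after the fact.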
Tools are registered via MCP and assigned to personas at the tenant level. Role-based permissions determine who can call what. The platform handles authorization, invocation, and logging — you define the rules.
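A sketch of what a tenant-level assignment might look like, with every server URL, persona, role, and tool name invented for illustration. The point it demonstrates: a call succeeds only if the tool is assigned to the persona and granted to the caller's role.

```python
# Hypothetical tenant config: MCP-registered tools, role grants per tool,
# and tool assignments per persona. All names are illustrative.
TENANT_CONFIG = {
    "tenant": "acme-corp",
    "tools": {
        "web_search": {"mcp_server": "mcp://search.internal", "roles": ["analyst", "admin"]},
        "file_write": {"mcp_server": "mcp://files.internal", "roles": ["admin"]},
    },
    "personas": {
        "research-assistant": {"tools": ["web_search"]},
    },
}

def allowed(persona: str, role: str, tool: str, cfg=TENANT_CONFIG) -> bool:
    """Both conditions must hold: tool assigned to persona, role granted."""
    tool_cfg = cfg["tools"].get(tool)
    persona_cfg = cfg["personas"].get(persona)
    return bool(
        tool_cfg and persona_cfg
        and tool in persona_cfg["tools"]
        and role in tool_cfg["roles"]
    )
```

Note the two independent gates: even an admin cannot push `file_write` through a persona that was never assigned it, which keeps a persona's blast radius bounded regardless of who is driving it.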
The plan/execute barrier is not a feature. It is the core architectural decision that makes every other guarantee possible.
Ruakiel is in private beta. We're onboarding teams who take AI security seriously. If you're building AI personas or deploying agents with real tool access and need enterprise-grade encryption, RBAC, and audit trails built in — we want to talk.