Introduction
Lelu is a policy engine for AI-driven systems. It combines Rego-based authorization, confidence-aware decisioning, human approval queues, and auditable enforcement so teams can ship AI agents without giving up control.
Get Your Free API Key
Start testing Lelu instantly with no signup required. Generate an anonymous API key and get 500 requests per day.
Features
Lelu includes the core building blocks needed to govern AI actions in production, with simple defaults for development and stronger controls for enterprise workloads.
Framework Agnostic
Works with any AI framework or model provider. Integrate with OpenAI, Anthropic, or custom agents.
Confidence-Aware Policies
Author policies that branch on confidence scores instead of returning only a binary allow/deny.
Human-in-the-Loop
Automatically queue risky operations for reviewers before execution.
Account & Session Management
Track agent reputation, detect anomalies, and monitor behavioral patterns.
Observability & Tracing
OpenTelemetry integration with AI agent semantic conventions.
Multi-Agent Coordination
Support for delegation chains and swarm operations.
Installation
Get started with Lelu in less than 5 minutes.
Architecture
Learn how Lelu works under the hood.
Why Lelu?
Traditional authorization systems (like RBAC or ABAC) are binary: a user either has permission or they don't. But AI agents operate on probabilities. When an AI agent tries to execute a trade, delete a database, or send an email, you don't just want to know whether it has permission; you also want to know how confident it is.
How Lelu Works
Confidence-Aware Policies
Evaluate authorization requests with built-in confidence thresholds. Write rules that adapt to the AI's self-reported certainty.
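To make the idea concrete, here is a minimal sketch of confidence-aware decisioning in Python. This is illustrative only: real Lelu policies are authored in Rego, and the action names and thresholds below are hypothetical.

```python
# Illustrative sketch of confidence-aware decisioning (not Lelu's Rego syntax).
# The outcome depends on the agent's self-reported confidence, not just the action.
def decide(action: str, confidence: float) -> str:
    """Return "allow", "review", or "deny" for a requested action."""
    # Hypothetical per-action thresholds; a real policy would live in Rego.
    thresholds = {"send_email": 0.80, "execute_trade": 0.95}
    required = thresholds.get(action)
    if required is None:
        return "deny"  # unknown actions are denied outright
    if confidence >= required:
        return "allow"  # confident enough to proceed
    if confidence >= required - 0.15:
        return "review"  # borderline: queue for human approval
    return "deny"

print(decide("execute_trade", 0.97))  # allow
print(decide("send_email", 0.70))     # review
print(decide("send_email", 0.50))     # deny
```

The key difference from binary authorization is the three-way outcome: the same action can be allowed, escalated, or denied depending on how certain the agent claims to be.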
Human-in-the-Loop
Automatically queue risky or low-confidence actions for human review. The AI pauses until a human approves or denies the request.
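The pause-until-reviewed flow can be sketched as a simple ticket queue. This is a toy in-memory model, not Lelu's server-side queue; the class and method names are assumptions for illustration.

```python
# Minimal sketch of a human-approval queue (illustrative; Lelu's real queue is
# server-side). Risky actions are parked as "pending" until a reviewer decides.
import uuid

class ApprovalQueue:
    def __init__(self) -> None:
        self._pending: dict[str, dict] = {}

    def submit(self, action: str, confidence: float) -> str:
        """Queue a risky action and return a ticket id the agent can poll."""
        ticket = str(uuid.uuid4())
        self._pending[ticket] = {
            "action": action,
            "confidence": confidence,
            "status": "pending",
        }
        return ticket

    def resolve(self, ticket: str, approved: bool) -> None:
        """Called by a human reviewer to approve or deny the action."""
        self._pending[ticket]["status"] = "approved" if approved else "denied"

    def status(self, ticket: str) -> str:
        return self._pending[ticket]["status"]

queue = ApprovalQueue()
ticket = queue.submit("delete_database", confidence=0.42)
print(queue.status(ticket))   # pending: the agent waits here
queue.resolve(ticket, approved=False)
print(queue.status(ticket))   # denied
```

The agent's side of the contract is simple: it submits the action, polls the ticket, and only executes once the status flips to approved.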
Audit Trail
Every decision, confidence score, and human approval is logged immutably for compliance and debugging.
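One common way to make a log tamper-evident is hash chaining, where each entry commits to the previous one. The sketch below shows the general technique; Lelu's actual storage format is not specified here, and all names are illustrative.

```python
# Sketch of a tamper-evident audit trail: each entry is hash-chained to the
# previous one, so rewriting history invalidates every later hash.
# (Illustrative only; not Lelu's actual storage format.)
import hashlib
import json

class AuditLog:
    def __init__(self) -> None:
        self._entries: list[dict] = []

    def record(self, decision: str, confidence: float, actor: str) -> None:
        prev_hash = self._entries[-1]["hash"] if self._entries else "0" * 64
        body = {"decision": decision, "confidence": confidence,
                "actor": actor, "prev": prev_hash}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self._entries.append({**body, "hash": digest})

    def verify(self) -> bool:
        """Recompute every hash; any edited entry breaks the chain."""
        prev = "0" * 64
        for e in self._entries:
            body = {k: e[k] for k in ("decision", "confidence", "actor", "prev")}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.record("allow", 0.97, actor="policy")
log.record("denied", 0.42, actor="reviewer:alice")
print(log.verify())                    # True
log._entries[0]["decision"] = "deny"   # tamper with history
print(log.verify())                    # False
```

Because every entry embeds the hash of its predecessor, an auditor can detect after-the-fact edits by re-walking the chain.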
Architecture
Model Context Protocol
Lelu provides an MCP server so you can use it with any AI model that supports the Model Context Protocol (MCP).
We provide a first-party MCP server, powered by fastmcp. Alternatively, you can use zckly/mcp-server-lelu or other community MCP servers.
CLI Options
Use the Lelu CLI to easily add the MCP server to your preferred client:
npx @lelu/mcp add --cursor
Manual Configuration
Alternatively, you can manually configure the MCP server for each client with the Lelu SSE endpoint:
claude mcp add --transport sse lelu http://localhost:3003/sse
AI Tooling
Use Lelu tools from Cursor, Claude, or any MCP-compatible client.
LLMs.txt
Lelu supports the LLMs.txt standard for AI-friendly documentation.
Skills
Pre-built authorization patterns for common AI agent workflows.
MCP
Model Context Protocol support for seamless AI integration.
CLI Options
Command-line setup for Cursor, Claude Code, Open Code, and local development.
Manual Configuration
Full control over Lelu configuration for advanced use cases.
