The protective shell for the AI brain.
Cryptographic identity. Policy enforcement. Tamper-evident audit trails. One config line. Sub-100ms. MCP-native.
AI agents are sending payments, accessing databases, calling external APIs, and executing real-world actions autonomously. Today, there is no standard way to prove what an agent did, whether it was authorised to do it, or if the record of what happened has been tampered with.
When an agent sends $50,000 to a fraudulent account, the company has no cryptographic proof of what happened, no evidence the action was unauthorised, and no defensible audit trail in court.
Enterprises want to deploy AI agents but can't — compliance teams need audit trails, risk teams need spending controls, and security teams need identity verification. Without these, the answer is no.
Every agent gets a unique Ed25519 keypair. Every action is signed. If an agent can't prove who it is, it can't act. Zero trust by default.
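A minimal sketch of what per-agent signing looks like, using Node's built-in Ed25519 support. The action payload and field names here are illustrative, not the Inntris wire format:

```typescript
import { generateKeyPairSync, sign, verify } from "node:crypto";

// Each agent holds its own Ed25519 keypair (illustrative, in-memory only).
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// The agent signs a canonical description of the action it wants to take.
const action = JSON.stringify({ type: "payment", amount_usd: "50.00" });
const signature = sign(null, Buffer.from(action), privateKey);

// The verifier checks the signature against the agent's registered public key.
// This only passes if the payload is untampered and signed by this exact agent.
const ok = verify(null, Buffer.from(action), publicKey, signature);
```

If the signature does not verify, there is no proven identity, so there is no action.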
Spending limits, rate controls, action-type restrictions. Set per-agent policies that are enforced before execution, not audited after the fact.
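The shape of a pre-execution policy check can be sketched like this. The field names (`dailyLimitUsd`, `allowedActions`) are hypothetical, not Inntris' actual schema; the point is that the check runs before the action, not after:

```typescript
// Hypothetical per-agent policy, evaluated before execution.
interface Policy {
  dailyLimitUsd: number;
  allowedActions: Set<string>;
}

interface Action {
  type: string;
  amountUsd: number;
}

function checkPolicy(
  policy: Policy,
  spentTodayUsd: number,
  action: Action
): { allowed: boolean; reason?: string } {
  // Action-type restriction: block anything outside the allow-list.
  if (!policy.allowedActions.has(action.type)) {
    return { allowed: false, reason: `action type '${action.type}' not permitted` };
  }
  // Spending limit: block if this action would exceed today's budget.
  if (spentTodayUsd + action.amountUsd > policy.dailyLimitUsd) {
    return { allowed: false, reason: "daily spending limit exceeded" };
  }
  return { allowed: true };
}

const policy: Policy = { dailyLimitUsd: 5000, allowedActions: new Set(["payment"]) };
const verdict = checkPolicy(policy, 4900, { type: "payment", amountUsd: 250 });
// verdict.allowed === false: 4900 + 250 exceeds the 5000 daily limit.
```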
Every verification decision is logged immutably. Logs are batched into Merkle trees and anchored to blockchain hourly. Not for debugging — for court.
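The batching step can be sketched as a standard Merkle tree over log entries: only the root needs to be anchored on-chain, and any later alteration of an entry changes the root. This is an illustrative construction, not Inntris' exact hashing scheme:

```typescript
import { createHash } from "node:crypto";

const sha256 = (data: Buffer): Buffer =>
  createHash("sha256").update(data).digest();

// Batch log entries into a Merkle tree; the 32-byte root is a compact,
// tamper-evident commitment to the whole batch.
function merkleRoot(leaves: Buffer[]): Buffer {
  let level = leaves.map((leaf) => sha256(leaf));
  while (level.length > 1) {
    const next: Buffer[] = [];
    for (let i = 0; i < level.length; i += 2) {
      // Duplicate the last node when a level has an odd number of entries.
      const right = level[i + 1] ?? level[i];
      next.push(sha256(Buffer.concat([level[i], right])));
    }
    level = next;
  }
  return level[0];
}

const entries = ["decision-1", "decision-2", "decision-3"].map((s) => Buffer.from(s));
const root = merkleRoot(entries);
// Changing any single entry produces a different root, so tampering is detectable.
```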
Think of it as Visa for AI agents. We don't execute actions. We verify they should happen. Identity, policy, proof.
{
  "mcpServers": {
    "inntris-guard": {
      "command": "npx",
      "args": ["inntris-mcp"],
      "env": {
        "INNTRIS_API_URL": "https://api.inntris.com",
        "INNTRIS_AGENT_ID": "your-agent-id",
        "INNTRIS_PRIVATE_KEY_B64": "your-private-key"
      }
    }
  }
}
Before executing a sensitive action, the agent calls the Inntris verification endpoint.
Inntris verifies identity (Ed25519 signature), checks policy (limits, permissions), and scores trust (0-100).
Returns APPROVED with cryptographic token — or BLOCKED with reason.
Every decision is logged immutably and anchored to blockchain.
{
  "verdict": "approved",
  "trust_score": 87,
  "approval_token": "eyJhbG...",
  "audit_id": "550e8400-e29b-41d4-a716-446655440000",
  "limits_remaining": {
    "daily_limit_usd": "5000.00",
    "daily_spent_usd": "1250.00",
    "daily_remaining_usd": "3750.00"
  }
}
{
  "verdict": "blocked",
  "verdict_reason": "Daily spending limit exceeded",
  "trust_score": 42,
  "policy_violated": "daily_limit_usd"
}
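On the client side, handling these two outcomes is a simple branch on the verdict. This is a hedged sketch: the response shape mirrors the examples above, but how the approval token is attached downstream is an assumption, not Inntris' documented API:

```typescript
// Discriminated union mirroring the approved/blocked responses shown above.
type Verdict =
  | { verdict: "approved"; approval_token: string; audit_id: string }
  | { verdict: "blocked"; verdict_reason: string; policy_violated: string };

function handleVerdict(v: Verdict): string {
  if (v.verdict === "approved") {
    // Proceed, attaching the approval token to the downstream call
    // (hypothetical convention for this sketch).
    return `execute with token ${v.approval_token}`;
  }
  // Blocked: surface the reason and do not execute the action.
  return `blocked: ${v.verdict_reason}`;
}

const outcome = handleVerdict({
  verdict: "blocked",
  verdict_reason: "Daily spending limit exceeded",
  policy_violated: "daily_limit_usd",
});
// outcome === "blocked: Daily spending limit exceeded"
```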
Industry standard. Same cryptography as Signal, SSH, and Solana. Every action is signed and verified.
Purpose-built for Anthropic's Model Context Protocol. Not adapted — designed from day one for the protocol that's becoming the standard.
Merkle trees batched hourly, anchored to Base L2. Cryptographic proof that records existed at a specific time and haven't been altered.
If verification fails, the action is blocked. Not logged and ignored — blocked. Defence in depth for autonomous systems.
Inntris is derived from the Scottish Gaelic Inntinn, meaning both "intellect" and "intention." The suffix draws from the Latin cassis, the word for a metal helmet. The name encodes exactly what we build: a purposeful protective shell that safeguards the cognitive core of autonomous AI systems.
Inntris Inc. provides security infrastructure and defensive protocols for artificial intelligence systems and third-party platforms. We serve as the protective layer that ensures the integrity of high-level cognitive models across diverse digital ecosystems.
We're onboarding a small group of companies building AI agents in production. You'll get direct access to the founding team, priority feature development, and early pricing. In return, we need your feedback, edge cases, and real-world agent workloads.
Inntris is built by Ronald Maduna, a Multi-Cloud Solutions Architect with certifications across AWS, Azure, GCP, and HashiCorp Terraform. After years of building enterprise cloud infrastructure at Deloitte and BMW IT Hub, he saw the gap: AI agents are going to production without the trust infrastructure enterprises require. Inntris is that infrastructure.
Connect on LinkedIn