Pre-revenue · Seeking design partners

Verification Infrastructure for AI Agents

The protective shell for the AI brain.

Cryptographic identity. Policy enforcement. Tamper-evident audit trails. One config line. Sub-100ms. MCP-native.

The problem

AI agents are acting. Nobody's verifying.

AI agents are sending payments, accessing databases, calling external APIs, and executing real-world actions autonomously. Today, there is no standard way to prove what an agent did, whether it was authorised to do it, or if the record of what happened has been tampered with.

When an agent sends $50,000 to a fraudulent account, the company has no cryptographic proof of what happened, no evidence the action was unauthorised, and no defensible audit trail in court.

Enterprises want to deploy AI agents but can't — compliance teams need audit trails, risk teams need spending controls, and security teams need identity verification. Without these, the answer is no.

The solution

The trust layer AI agents need.

Cryptographic Identity

Every agent gets a unique Ed25519 keypair. Every action is signed. If an agent can't prove who it is, it can't act. Zero trust by default.
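The sign-and-verify model above can be sketched in a few lines. This is an illustration of the identity scheme, not the Inntris SDK; it uses the third-party Python `cryptography` package, and the action payload is a made-up example.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Each agent holds a unique keypair; the private key never leaves the agent.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Sign the canonical bytes of an action before requesting verification.
action = b'{"action": "send_payment", "amount_usd": "250.00"}'
signature = private_key.sign(action)

# The verifier checks the signature against the agent's registered public key.
try:
    public_key.verify(signature, action)
    verified = True
except InvalidSignature:
    verified = False

print(verified)  # True: the action provably came from this agent's key
```

Any change to the signed bytes, or a signature from a different key, makes `verify` raise: an unverifiable action simply never proceeds.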

Policy Enforcement

Spending limits, rate controls, action-type restrictions. Set per-agent policies that are enforced before execution, not audited after the fact.
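A minimal sketch of pre-execution policy checks. The field names `daily_limit_usd` and `daily_spent_usd` come from the response examples below; `AgentPolicy` and `check_policy` are hypothetical names for illustration.

```python
from dataclasses import dataclass
from decimal import Decimal

@dataclass
class AgentPolicy:
    daily_limit_usd: Decimal
    allowed_actions: frozenset
    daily_spent_usd: Decimal = Decimal("0")

def check_policy(policy: AgentPolicy, action: str, amount_usd: Decimal):
    """Evaluate a proposed action against per-agent policy BEFORE execution."""
    if action not in policy.allowed_actions:
        return "blocked", f"Action type not permitted: {action}"
    if policy.daily_spent_usd + amount_usd > policy.daily_limit_usd:
        return "blocked", "Daily spending limit exceeded"
    return "approved", None

policy = AgentPolicy(Decimal("5000.00"), frozenset({"send_payment"}),
                     Decimal("4900.00"))
print(check_policy(policy, "send_payment", Decimal("250.00")))
# ('blocked', 'Daily spending limit exceeded')
```

The point of the design is the ordering: the check runs before the action, so a violation is a blocked action, not a line in a post-mortem.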

Tamper-Evident Audit Trail

Every verification decision is logged immutably. Logs are batched into Merkle trees and anchored to a blockchain hourly. Not for debugging — for court.
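The batching step can be sketched with a plain Merkle fold. The exact tree construction Inntris uses isn't specified here; this sketch assumes SHA-256 and duplicates the last node on odd levels.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Fold a batch of audit-log entries into a single Merkle root."""
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

batch = [b"decision-1", b"decision-2", b"decision-3"]
root = merkle_root(batch)

# Anchoring only this 32-byte root on-chain commits to the whole batch:
# altering any logged decision changes the root.
assert merkle_root([b"decision-1", b"tampered!", b"decision-3"]) != root
```

One small on-chain write per hour thus makes every decision in that hour tamper-evident.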

Think of it as Visa for AI agents. We don't execute actions. We verify they should happen. Identity, policy, proof.

Integration

One config line. Sub-100ms overhead.

mcp_config.json
{
  "mcpServers": {
    "inntris-guard": {
      "command": "npx",
      "args": ["inntris-mcp"],
      "env": {
        "INNTRIS_API_URL": "https://api.inntris.com",
        "INNTRIS_AGENT_ID": "your-agent-id",
        "INNTRIS_PRIVATE_KEY_B64": "your-private-key"
      }
    }
  }
}
1. Agent calls inntris_guard

Before executing a sensitive action, the agent calls the Inntris verification endpoint.

2. Verify, check, score

Inntris verifies identity (Ed25519 signature), checks policy (limits, permissions), and scores trust (0-100).

3. Approve or block

Returns APPROVED with a cryptographic token, or BLOCKED with a reason.

4. Log immutably

Every decision is logged immutably and anchored to blockchain.

Verification latency: <50ms at p95. Your agents don't slow down.

Product

What verification looks like.

APPROVED
{
  "verdict": "approved",
  "trust_score": 87,
  "approval_token": "eyJhbG...",
  "audit_id": "550e8400-e29b-41d4-a716-446655440000",
  "limits_remaining": {
    "daily_limit_usd": "5000.00",
    "daily_spent_usd": "1250.00",
    "daily_remaining_usd": "3750.00"
  }
}
BLOCKED
{
  "verdict": "blocked",
  "verdict_reason": "Daily spending limit exceeded",
  "trust_score": 42,
  "policy_violated": "daily_limit_usd"
}
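Client-side, handling these responses reduces to one rule: execute only on an explicit approval. A minimal sketch, using the fields from the examples above (`handle_verdict` is a hypothetical helper name):

```python
import json

def handle_verdict(response_json: str):
    """Act on a verification response: proceed only on an explicit approval."""
    verdict = json.loads(response_json)
    if verdict.get("verdict") == "approved":
        # The approval token accompanies the downstream action as proof.
        return True, verdict["approval_token"]
    # Anything else (blocked, malformed, unknown) means: do not execute.
    return False, verdict.get("verdict_reason", "No explicit approval")

blocked = '{"verdict": "blocked", "verdict_reason": "Daily spending limit exceeded"}'
ok, reason = handle_verdict(blocked)
print(ok, reason)  # False Daily spending limit exceeded
```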
Trust stack

Built for production. Auditable by design.

Ed25519 Signatures

Industry standard. Same cryptography as Signal, SSH, and Solana. Every action is signed and verified.

MCP-Native

Purpose-built for Anthropic's Model Context Protocol. Not adapted — designed from day one for the protocol that's becoming the standard.

Blockchain-Anchored Audit

Merkle trees batched hourly, anchored to Base L2. Cryptographic proof that records existed at a specific time and haven't been altered.
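Checking that a specific record belongs to an anchored batch is a standard Merkle inclusion proof. A sketch, assuming SHA-256 and a sibling-path proof format (the actual proof encoding isn't specified here):

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_inclusion(entry: bytes, proof: list, anchored_root: bytes) -> bool:
    """Recompute the path from one log entry up to the root anchored on-chain."""
    node = sha256(entry)
    for sibling, side in proof:        # side: which side the sibling hash sits on
        node = sha256(sibling + node) if side == "left" else sha256(node + sibling)
    return node == anchored_root

# Two-leaf batch: root = H(H(a) + H(b))
a, b = b"decision-a", b"decision-b"
root = sha256(sha256(a) + sha256(b))

assert verify_inclusion(a, [(sha256(b), "right")], root)
assert not verify_inclusion(b"tampered", [(sha256(b), "right")], root)
```

Anyone holding the on-chain root can run this check independently, which is what makes the record defensible rather than merely stored.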

Fail-Closed Design

If verification fails, the action is blocked. Not logged and ignored — blocked. Defence in depth for autonomous systems.
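The fail-closed stance can be sketched as a wrapper: errors, timeouts, and anything short of a positive verdict all resolve to "blocked". `guarded` and the stub verifier are hypothetical names for illustration.

```python
def guarded(verify, execute):
    """Fail-closed: only an explicit approval lets the action run."""
    def run(action):
        try:
            verdict = verify(action)
        except Exception:
            # Verifier down or unreachable: block, never silently proceed.
            return "blocked: verification unavailable"
        if verdict != "approved":
            return f"blocked: {verdict}"
        return execute(action)
    return run

def flaky_verifier(action):
    raise TimeoutError("verification service unreachable")

pay = guarded(flaky_verifier, lambda a: f"executed {a}")
print(pay("send_payment"))  # blocked: verification unavailable
```

The inverse design (log the failure, run the action anyway) would turn every verifier outage into an enforcement gap.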

The name

Purpose protecting intellect.

Inntris derives from the Scottish Gaelic inntinn, meaning both "intellect" and "intention." The suffix draws on the Latin cassis, a metal helmet. The name encodes exactly what we build: a purposeful protective shell that safeguards the cognitive core of autonomous AI systems.

Inntris INC provides advanced security infrastructure and defensive protocols for artificial intelligence systems and third-party platforms. We serve as the protective layer that ensures the integrity of high-level cognitive models across diverse digital ecosystems.

Early access

Looking for 5 design partners.

We're onboarding a small group of companies building AI agents in production. You'll get direct access to the founding team, priority feature development, and early pricing. In return, we need your feedback, edge cases, and real-world agent workloads.

What you get

  • Full platform access (free during partner period)
  • Direct Slack/Discord channel with engineering team
  • Your use cases shape the product roadmap
  • Priority support and custom integrations

Who we're looking for

  • Companies running AI agents in production or preparing to
  • Engineering teams that care about agent accountability
  • Any framework: LangChain, CrewAI, custom MCP, or your own stack
Apply for Design Partner Program
Specifications

Under the hood.

Signatures: Ed25519 (NaCl/libsodium)
Protocol: Model Context Protocol (MCP)
Audit storage: PostgreSQL + Merkle trees
Blockchain: Base L2 (Ethereum)
Latency: <50ms per verification (p95)
Throughput: 10,000+ transactions/hour
SDKs: Python, Node.js, Go, Rust
Deployment: Railway, Render, Docker, K8s
Compliance: GDPR-ready, 7-year audit retention
Team

Built by infrastructure engineers.

Inntris is built by Ronald Maduna, a Multi-Cloud Solutions Architect with certifications across AWS, Azure, GCP, and HashiCorp Terraform. After years of building enterprise cloud infrastructure at Deloitte and BMW IT Hub, he saw the gap: AI agents are going to production without the trust infrastructure enterprises require. Inntris is that infrastructure.

Connect on LinkedIn