VORIONVORION
Every AI governance framework today watches inputs and outputs. None watches the agent.

Govern What Agents Are,
Not Just What They Do

Governance that travels with the agent — not bolted on after. Five layers of enforcement. Internal-state integrity monitoring. Cryptographic proof chains. The only governance framework that extends trust from the I/O boundary to the parameter space — stopping compromised agents before their first malicious action.

Get started in 4 lines:

import { createTrustEngine } from '@vorionsys/atsf-core';

const engine = createTrustEngine();
await engine.initializeEntity('agent-001', 2); // start at T2

const callback = engine.createCallback('agent-001');
await agent.invoke(input, { callbacks: [callback] }); // unchanged

Works with LangChain, CrewAI, AutoGen, or any callback-compatible agent framework.

Why Vorion Is Different

Every other governance framework watches the door. Vorion watches the building.

Internal-State Monitoring

Your agent’s neural network weights have a shape — a fingerprint. Vorion takes a compact cryptographic snapshot of that shape using SVD decomposition. If anything changes — a fine-tuning attack, a swapped checkpoint, a poisoned adapter — the fingerprint changes before the agent’s behavior does. You catch the compromise before the first malicious action, not after.

Think of it like a tamper seal on a medicine bottle. You don’t need to test every pill — you just check if the seal is broken.

Pre-Action Gating

Most governance checks happen after an agent acts. Vorion checks before. Every action is classified across 5 risk levels and compared against the agent’s trust tier threshold before any capability is invoked. A compromised agent is stopped before its first malicious action — not after the damage is done.

Think of it like a key card system. You don’t investigate after someone enters the server room — you check their badge before the door opens.

Cryptographic Proof Chain

Every governance decision — every trust score change, every gate check, every circuit breaker trip — is SHA-256 hash-linked to the one before it. Tamper with one record and every subsequent hash breaks, visibly and irreversibly. Any third party can verify the chain with a public key.

Think of it like a blockchain for governance decisions — except it’s just hash linking, which means it’s fast and doesn’t need consensus.

Choose Your Path

Whether you're building, evaluating, or contributing — we'll get you where you need to go.

I Want to Build

Start integrating trust scoring and governance into your AI agents.

npm install @vorionsys/atsf-core View on GitHub API Documentation

I Want to Evaluate

Explore how Vorion fits your organization's AI governance needs.

I Want to Contribute

Join the community shaping the future of AI governance.

Need something else? hello@vorion.org

Five Layers of Defense

Every agent action passes through five independent enforcement layers — each one a hard gate. No single layer bypass compromises the system. This is defense in depth, not defense in hope.

L0

Velocity

< 2ms

Three-tier rate limiting — burst, sustained, and hourly caps per entity per trust tier. In-memory counters, zero I/O. Stops flooding before anything else runs.

L1

Tripwires

< 5ms

60+ compiled regex patterns firing before the LLM sees anything. Deterministic, absolute — no trust level overrides these. Catches injection attacks, prompt manipulation, and known exploit patterns in microseconds.

L2

AI Critic

< 3s

Multi-provider adversarial AI evaluation. Dispatches to multiple LLM providers in parallel — first agreeing responses win. Catches subtle manipulation that regex can't detect.

L3

Policy Engine

< 10ms

YAML/JSON constraint evaluation against trust-tier-specific policies. Expression evaluator checks capability gates, risk thresholds, and custom organizational rules.

L4

Circuit Breaker

< 1ms

System-level halt on cascading failures. CLOSED → OPEN → HALF_OPEN state machine. No single agent can bypass it. Failure threshold triggers automatic cooldown with probe-based recovery.

Total p99 latency: < 3.5 seconds — the LLM critic is the bottleneck. Everything else is sub-millisecond.

Beyond I/O Governance

The only governance framework that extends trust enforcement from the behavioral domain to the internal-state domain — governing what agents are, not just what they do.

Internal-State Monitoring

SVD-based fingerprinting of neural network weight tensors produces a compact cryptographic signature of an agent's parameter configuration. Changes — from fine-tuning attacks to checkpoint substitution to adapter poisoning — are detected before they produce visible behavioral changes.

ParamesphereLayer 0: SENSE
Learn more

Treacherous Turn Defense

Pre-Action Gating checks trust before execution — not after. A compromised agent is stopped before its first malicious action. The gate classifies risk across 5 levels and compares against trust tier thresholds before any capability is invoked.

Pre-Action GateATSF §4.4
Learn more

Asymmetric Trust Dynamics

Trust is logarithmically gained and exponentially lost. Tier-scaled failure penalties — 3× at T0 through 10× at T7 — mean that higher-trusted agents face proportionally harsher consequences. 182-day decay with a 50% floor ensures continuous proof of reliability.

3×–10× Penalty Scale182-day Decay50% Floor
Learn more

Multi-Validator Consensus

For critical decisions, Triple Modular Redundancy spawns parallel inference replicas. Divergence between them exponentially derates the composite trust score — the same fault tolerance principle used in avionics, applied to AI governance.

TMR Consensus5-Level Degradation
Learn more

Predictive Risk Forecasting

Monte Carlo simulations run stochastic failure projections across configurable horizons. Results classify into four risk bands (GREEN/YELLOW/ORANGE/RED) that proactively derate trust and shorten operational horizons before failures occur.

Monte Carlo4-Band Classification

Evolutionary Self-Healing

Governance parameters evolve over time using genetic algorithms with tournament selection, crossover, and mutation. Phased blending (10% → 50% → 100%) applies evolved parameters gradually. Human approval gates prevent unchecked drift. The system optimizes itself.

Genetic AlgorithmPhased Blending

Cryptographic Proof Chain

Every enforcement decision is SHA-256 hash-linked to the one before it, with optional Ed25519 signatures. This isn't an append-only log — it's a tamper-evident chain. Modify one record and every subsequent hash breaks. The entire history of agent governance is cryptographically verifiable by any third party with a public key.

Works With Your Stack

Callback-based integration means no architectural changes required.

TypeScript · any frameworkCustom Agents
import { createTrustEngine } from '@vorionsys/atsf-core';

const engine = createTrustEngine();
await engine.initializeEntity('agent-001', 2);

// Your existing code — unchanged
const callback = engine.createCallback('agent-001');
await agent.invoke(input, { callbacks: [callback] });

CrewAI

const callback = engine.createCallback('crew-agent');
const crew = new Crew({
  agents: [researcher, writer],
  callbacks: [callback],
});
await crew.kickoff({ topic: 'quarterly report' });

Any Agent

// Wrap any async function with governance
const governed = engine.govern('custom-agent', async (ctx) => {
  const result = await myAgent.run(ctx.input);
  return result;
});
await governed({ input: 'analyze this dataset' });

See It In Action

Watch how Vorion evaluates and governs AI agent actions in real-time.

Vorion
Trust ScoreT4 Standard
687/ 1000
20
npm Packages
5
Enforcement Layers
18,500+
Automated Tests
1
Patent Filed
< 3.5s
p99 Pipeline Latency
Patent PendingNIST AI RMF AlignedEU AI Act AlignedSOC 2 MappedApache-2.0 Licensed

Built for Compliance

Vorion maps to the frameworks your compliance team already cares about.

NIST AI RMF
Aligned
EU AI Act
Aligned
ISO 42001
Mapped
SOC 2 Type II
Mapped
GDPR
Aligned

On the roadmap: NIST AI 600-1, NIST CAISI, ISO 42005, CISA AI Guidelines — all targeting 2026. EU AI Act high-risk enforcement begins August 2026.

Get in Touch

Whether you're evaluating Vorion for your organization, interested in partnership, or want to contribute — we'd love to hear from you.