Your AI agents are not EU AI Act compliant.
Here's how to fix it before August 2.
The EU AI Act goes into full enforcement on August 2, 2026. For AI agents, compliance means 6 months of immutable, tamper-resistant behavioral audit trails — and that clock started in February. If you're reading this and haven't deployed logging infrastructure, you're already late.
What the EU AI Act actually requires for AI agents
Article 12: Automatic logging is mandatory
High-risk AI systems must automatically log all operations, inputs that influence outputs, and every decision made with real-world consequences. Autonomous agents that execute financial transactions, make procurement decisions, or take customer-facing actions are high-risk by default.
Logs must be tamper-resistant
Standard database logs can be altered. The regulation requires that audit trails cannot be modified after the fact. A Merkle-chain or cryptographically signed log satisfies this. A CSV file or plain database table does not.
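To make the distinction concrete, here is a minimal hash-chain sketch in TypeScript using Node's built-in crypto module. This is an illustration of the technique, not MnemoPay's implementation: each entry commits to the hash of the previous one, so editing any past row breaks every hash after it.

```typescript
import { createHash } from 'node:crypto';

// Hash-chained append-only log: each entry commits to the previous hash.
interface Entry { data: string; prevHash: string; hash: string; }

function append(log: Entry[], data: string): void {
  const prevHash = log.length ? log[log.length - 1].hash : '0'.repeat(64);
  const hash = createHash('sha256').update(prevHash + data).digest('hex');
  log.push({ data, prevHash, hash });
}

function verify(log: Entry[]): boolean {
  let prev = '0'.repeat(64);
  for (const e of log) {
    const expected = createHash('sha256').update(prev + e.data).digest('hex');
    if (e.prevHash !== prev || e.hash !== expected) return false;
    prev = e.hash;
  }
  return true;
}

const log: Entry[] = [];
append(log, 'charge 500 Invoice A');
append(log, 'charge 120 Invoice B');
console.log(verify(log)); // true
log[0].data = 'charge 5 Invoice A'; // tamper with history
console.log(verify(log)); // false: the chain no longer verifies
```

A plain database row can be UPDATEd silently; here the tampering is detectable by anyone who replays the chain.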
6 months of operational history required on Day 1
Conformity assessments require pre-enforcement operational data. The practical implication: to pass an audit on August 2, your agents needed to start generating compliant logs by February 2. If you haven't started, deploy immediately — every week of delay is a week of missing evidence.
Fines: up to €35M or 7% of global annual turnover
For serious violations. Even lower-tier violations (insufficient documentation, missing logs) carry fines of up to €15M or 3% of global turnover, whichever is higher. For a $100M ARR company, missing a log file means eight-figure exposure.
The EU AI Act compliance checklist for AI agents
Check each requirement against your current stack
Every agent action produces a log entry
Inputs, outputs, decisions — automatically recorded, not manually added
Log entries are cryptographically signed and tamper-evident
Merkle tree or equivalent — any post-hoc modification is detectable
Behavioral anomaly detection is in place
You can prove your agent was behaving consistently — deviations are flagged automatically
Agent identity is cryptographically stable
You can prove the same agent ran the same workflow — not an impostor or hijacked instance
Human-in-the-loop (HITL) approval is available for high-risk decisions
Agents above a spend/action threshold must pause and require human sign-off
Continuous behavioral scoring (not point-in-time)
Ongoing trust score that degrades with anomalous behavior — not a one-time certification
Logs are queryable and exportable for auditors
Your compliance team can pull the full history for any agent, any time window, in a format auditors accept
If you can't check all 7, you are not compliant. MnemoPay checks all 7 automatically from the moment you add 5 lines of code.
How MnemoPay satisfies every requirement
Every transaction is hashed into a Merkle tree. Any post-hoc modification to the log is immediately detectable. This is the tamper-resistant evidence Article 12 requires.
An exponentially weighted moving average (EWMA) detects behavioral drift in real time. When your agent deviates from its baseline, the alert fires before a rogue action completes.
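As a rough illustration of how EWMA-based drift detection works (a simplified sketch with an assumed fixed tolerance, not MnemoPay's actual detector):

```typescript
// Minimal EWMA drift detector: flags a value that deviates from the
// smoothed baseline by more than `tolerance` (relative), then folds the
// value into the baseline so slow change is tolerated but jumps are not.
class EwmaDetector {
  private baseline: number | null = null;
  constructor(private alpha = 0.2, private tolerance = 0.5) {}

  observe(value: number): boolean {
    if (this.baseline === null) {
      this.baseline = value; // first sample establishes the baseline
      return false;
    }
    const drift = Math.abs(value - this.baseline) / this.baseline;
    const anomalous = drift > this.tolerance;
    this.baseline = this.alpha * value + (1 - this.alpha) * this.baseline;
    return anomalous;
  }
}

const detector = new EwmaDetector();
[480, 510, 495, 505].forEach(v => detector.observe(v)); // normal spend pattern
console.log(detector.observe(5000)); // true: 10x jump flagged immediately
```

The key property for compliance is that the check runs on every action, inline, rather than in a periodic batch review.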
Ed25519 keypairs give each agent a stable, verifiable identity. The same agent that ran yesterday's workflow is provably the same one running today's — or it isn't, and you know immediately.
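Ed25519 signing and verification are available directly in Node's built-in crypto module; a minimal sketch of the idea (not the SDK's code):

```typescript
import { generateKeyPairSync, sign, verify } from 'node:crypto';

// Each agent holds a stable Ed25519 keypair. Every action it emits is
// signed, so an impostor without the private key cannot produce entries
// that verify against the agent's registered public key.
const { publicKey, privateKey } = generateKeyPairSync('ed25519');

const action = Buffer.from('procurement-agent-01: charge 500 Invoice A');
const signature = sign(null, action, privateKey); // Ed25519 takes no digest

console.log(verify(null, action, publicKey, signature)); // true
const forged = Buffer.from('procurement-agent-01: charge 50000 Invoice A');
console.log(verify(null, forged, publicKey, signature)); // false
```

Verification needs only the public key, so auditors can check signatures without ever touching agent credentials.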
Set spend thresholds that pause execution and require human sign-off. Configurable per-agent, per-action type, or globally. Satisfies the "meaningful human oversight" requirement.
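A threshold gate of this kind can be sketched as follows (a hypothetical helper for illustration, not the MnemoPay API):

```typescript
// Human-in-the-loop spend gate: actions above `threshold` are suspended
// until an approver callback resolves; everything below executes directly.
async function gatedCharge(
  amount: number,
  threshold: number,
  requestApproval: () => Promise<boolean>,
): Promise<'executed' | 'rejected'> {
  if (amount > threshold) {
    const approved = await requestApproval(); // pause for human sign-off
    if (!approved) return 'rejected';
  }
  return 'executed'; // below threshold, or explicitly approved
}

// Small spends pass through; large ones wait on the approver.
gatedCharge(500, 1000, async () => false).then(r => console.log(r));  // 'executed'
gatedCharge(5000, 1000, async () => false).then(r => console.log(r)); // 'rejected'
```

In production the approver callback would be a Slack message, ticket, or dashboard prompt; the essential property is that the agent blocks rather than proceeding.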
Continuous behavioral trust score — not a one-time audit snapshot. Regulators and auditors can see a rolling 6-month behavioral history, not just a compliance checkbox.
Pull the complete log for any agent, any time range, in JSON or CSV. Hand it directly to an auditor or compliance team during a conformity assessment.
5 lines. Full compliance infrastructure.
import { MnemoPay } from '@mnemopay/sdk';
// One instance per agent — Merkle trail starts immediately
const mp = MnemoPay.quick('procurement-agent-01');
// Every transaction is logged, signed, and anomaly-checked
const tx = await mp.charge(500, 'Invoice #8842 — office supplies');
const score = await mp.agentFicoScore(); // 300–850 behavioral score
const audit = await mp.exportAuditLog(); // hand to auditor
Also satisfies NIST AI Agent Standards (Feb 2026)
NIST launched the Collaborative AI System Integration (CAISI) initiative on February 17, 2026, establishing behavioral governance standards for enterprise AI agents. SOC 2 audits now scrutinize AI agent access patterns. MnemoPay's behavioral memory, identity registry, and Merkle audit trail satisfy NIST CAISI requirements out of the box.
EU AI Act ✓ Article 12 · NIST CAISI ✓ Feb 2026 · SOC 2 AI ✓ Trust Criteria
⚠ August 2, 2026 — Start generating compliant logs today
Apache 2.0 · Free forever · 5 lines of code