Executive Summary
The VeritasChain Standards Organization (VSO) announces the official release of CAP (Content / Creative AI Profile) v1.0, the world's first open specification for cryptographic audit trails in AI content generation systems.
CAP v1.0 introduces Safe Refusal Provenance (SRP), a breakthrough mechanism that enables AI platforms to create independently verifiable proof of content moderation decisions—including the critical ability to prove which generation requests were refused.
The Imperative for Verification-Based AI Accountability
The Collapse of Trust-Based Governance
The AI industry has operated under a trust-based governance model: platforms implement content moderation systems, publish transparency reports claiming certain refusal rates, and stakeholders accept these claims at face value. Recent events have demonstrated the fundamental inadequacy of this approach.
When AI platforms face scrutiny regarding harmful content generation, the response invariably follows a predictable pattern: "Our safeguards are working." Yet no mechanism exists for independent parties—regulators, auditors, researchers, or the public—to verify such claims.
| Stakeholder | Impact of Unverifiable Claims |
|---|---|
| Regulators | Cannot enforce compliance without evidence |
| Auditors | Cannot conduct meaningful assessments |
| Platforms | Responsible actors cannot be distinguished from irresponsible ones |
| Users | Cannot make informed platform choices |
| Society | Cannot assess true state of AI safety |
The Negative Proof Problem
The technical root of this accountability gap lies in what we term the Negative Proof Problem: AI systems can readily prove what they generated (the output exists as evidence), but cannot prove what they refused to generate.
Traditional logging architectures record successful operations. When a generation request is blocked, the "evidence" consists merely of a database entry stating that blocking occurred. Such entries can be created retroactively, selectively generated, and cannot be independently verified.
Regulatory Recognition
| Regulation | Requirement | Implementation Gap |
|---|---|---|
| EU AI Act Article 12 | Automatic logging for high-risk AI | No standard format specified |
| DSA Article 37 | Independent audits for VLOPs | Audit methodology undefined |
| Colorado AI Act | Impact assessment documentation | Verification mechanism absent |
| TAKE IT DOWN Act | NCII removal evidence | Evidence standards unspecified |
CAP v1.0: Technical Architecture
Design Philosophy
"Verify, Don't Trust"
In high-stakes domains—content moderation affecting human safety and dignity—trust must be earned through verifiable evidence, not assumed through claims.
Specification Scope
What CAP Provides
- Data format for tamper-evident AI audit trails
- Mechanism for proving AI decisions, including refusals
- Infrastructure for third-party verification
- Interoperability with C2PA, SCITT
What CAP Does Not Provide
- Content filtering or moderation logic
- Real-time intervention capabilities
- Judgment on content appropriateness
- Replacement for existing provenance standards
CAP operates as a system-level audit infrastructure, complementing asset-level provenance standards such as C2PA. Where C2PA answers "Is this content authentic?", CAP answers "What decisions did the AI system make?"
Safe Refusal Provenance (SRP)
The defining innovation of CAP v1.0 is Safe Refusal Provenance (SRP), which transforms non-generation into a first-class, cryptographically provable event.
Event Architecture
Request Receipt
│
▼
┌──────────────────┐
│ GEN_ATTEMPT │ ◄── Logged BEFORE safety evaluation
│ (mandatory) │
└────────┬─────────┘
│
▼
┌──────────────────┐
│ Safety Analysis │
└────────┬─────────┘
│
┌────┴────┬─────────────┐
│ │ │
▼ ▼ ▼
┌───────┐ ┌────────┐ ┌───────────┐
│ GEN │ │GEN_DENY│ │ GEN_ERROR │
│ (pass)│ │(refuse)│ │ (failure) │
└───────┘ └────────┘ └───────────┘
The critical architectural decision: GEN_ATTEMPT is logged before safety evaluation occurs. This creates an unforgeable record of request existence, independent of outcome.
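This ordering can be sketched as follows. The `log`, `safety_check`, and `generate` callables are illustrative stand-ins, not interfaces mandated by CAP; the point is only that the attempt record is committed before any evaluation runs:

```python
def handle_request(prompt, log, safety_check, generate):
    """Sketch of SRP event ordering (illustrative, not normative).

    GEN_ATTEMPT is committed to the audit log BEFORE the safety
    evaluation, so the request's existence is recorded regardless
    of outcome.
    """
    log("GEN_ATTEMPT", prompt)  # unconditional, outcome-independent
    try:
        if safety_check(prompt):
            output = generate(prompt)
            log("GEN", prompt)
            return output
        log("GEN_DENY", prompt)
        return None
    except Exception:
        log("GEN_ERROR", prompt)
        raise
```

Because the attempt event exists independently of the outcome, omitting a later GEN_DENY record leaves a visible imbalance rather than a silent gap.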
The Completeness Invariant
∑ GEN_ATTEMPT = ∑ GEN + ∑ GEN_DENY + ∑ GEN_ERROR
This equation must hold for any time window. Its implications are profound:
| Condition | Implication |
|---|---|
| Attempts > Outcomes | Missing outcome records (potential concealment) |
| Outcomes > Attempts | Fabricated outcome records |
| Equality Maintained | Audit trail complete |
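A checker for the invariant is a few lines; this sketch assumes events are dictionaries carrying an `EventType` field (an illustrative representation, not the normative CAP encoding):

```python
from collections import Counter

def check_completeness(events):
    """Verify the Completeness Invariant over a window of events:
    every GEN_ATTEMPT must be matched by exactly one outcome event
    (GEN, GEN_DENY, or GEN_ERROR)."""
    counts = Counter(ev["EventType"] for ev in events)
    attempts = counts["GEN_ATTEMPT"]
    outcomes = counts["GEN"] + counts["GEN_DENY"] + counts["GEN_ERROR"]
    return attempts == outcomes
```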
Integrity Layer
CAP v1.0 implements a three-tier integrity architecture:
1. Hash Chain
Events are linked through SHA-256 hash chains. Each event contains the hash of the previous event, creating an append-only structure where any modification invalidates all subsequent hashes.
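A minimal sketch of such linking, assuming events are JSON-serializable dictionaries and using sorted-key JSON as a stand-in canonicalization (a real profile would pin an exact canonical encoding):

```python
import hashlib
import json

def chain_events(events, genesis="0" * 64):
    """Link events into an append-only SHA-256 hash chain.

    Each event receives a PrevHash (the previous event's hash) and an
    EventHash computed over its own canonical serialization, so any
    modification invalidates every subsequent hash.
    """
    prev_hash = genesis
    chained = []
    for ev in events:
        ev = dict(ev, PrevHash=prev_hash)
        # EventHash is computed before the field is added, i.e. over
        # the event body excluding EventHash itself.
        ev["EventHash"] = hashlib.sha256(
            json.dumps(ev, sort_keys=True).encode()
        ).hexdigest()
        prev_hash = ev["EventHash"]
        chained.append(ev)
    return chained
```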
2. Digital Signatures
All events are signed using Ed25519 (RFC 8032), providing authentication of event origin, non-repudiation of logged decisions, and tamper detection at the event level.
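Signing and verification look like the following sketch, which uses the third-party `cryptography` package's Ed25519 primitives (an implementation choice for illustration; CAP specifies the algorithm, not a library):

```python
# Per-event Ed25519 (RFC 8032) signing sketch — illustrative only.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
)

signing_key = Ed25519PrivateKey.generate()
event_bytes = b'{"EventType":"GEN_DENY","PrevHash":"..."}'

signature = signing_key.sign(event_bytes)   # 64-byte Ed25519 signature
public_key = signing_key.public_key()
public_key.verify(signature, event_bytes)   # raises if invalid

# Any modification to the signed bytes is detected at the event level.
try:
    public_key.verify(signature, event_bytes + b"x")
    tampered_detected = False
except InvalidSignature:
    tampered_detected = True
```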
3. External Anchoring
Merkle roots of event batches are anchored to external timestamping services:
| Anchor Target | Description |
|---|---|
| RFC 3161 TSA | Traditional PKI timestamping |
| SCITT | IETF transparency services |
| Blockchain | Bitcoin/Ethereum anchoring |
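Computing the Merkle root of an event batch is straightforward; this sketch pairs hex-encoded leaf hashes and duplicates the last node on odd-sized levels (one common convention; the normative tree construction is defined by the specification itself):

```python
import hashlib

def merkle_root(leaf_hashes):
    """Compute a Merkle root over a batch of hex-encoded event hashes.

    Only this single root needs to be anchored externally; inclusion of
    any individual event can later be proven with a logarithmic-size
    proof path.
    """
    if not leaf_hashes:
        raise ValueError("empty batch")
    level = list(leaf_hashes)
    while len(level) > 1:
        if len(level) % 2:            # odd level: duplicate last node
            level.append(level[-1])
        level = [
            hashlib.sha256((level[i] + level[i + 1]).encode()).hexdigest()
            for i in range(0, len(level), 2)
        ]
    return level[0]
```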
Conformance Levels
Bronze Level
Target: SMEs, early adopters, internal deployments
- Event Logging: INGEST, TRAIN, GEN, EXPORT
- Hash Chain: SHA-256
- Signatures: Ed25519
- Retention: 6 months minimum
Silver Level
Target: Enterprise platforms, VLOPs, regulated entities
- All Bronze requirements
- SRP Extension: GEN_ATTEMPT, GEN_DENY, GEN_ERROR
- Completeness Invariant: Enforced and verifiable
- External Anchoring: Daily minimum (RFC 3161)
- Evidence Pack: Structured export format
- Retention: 2 years minimum
Gold Level
Target: High-risk AI systems, DSA Category 1 platforms
- All Silver requirements
- Near-Real-Time Anchoring: one-hour maximum delay
- SCITT Integration: Transparency service participation
- HSM Key Management: Hardware security modules
- Audit API: Programmatic auditor access
- Retention: 5 years minimum
Privacy-Preserving Verification
A critical design challenge addressed by CAP v1.0 is enabling verification without exposing harmful content. The specification achieves this through hash-based attestation.
Auditor/Regulator                          Platform
─────────────────                          ────────
1. Receives complaint containing           Maintains CAP audit
   harmful prompt                          trail

2. Computes locally:
     hash = SHA256(prompt)

3. Queries platform:
     "GEN_DENY exists with
      PromptHash = hash?"

4. Platform returns:                       Platform searches
     - Existence confirmation              for matching hash
     - Merkle proof
     - External anchor proof

5. Auditor verifies independently:
     - Merkle proof validity
     - Anchor timestamp authenticity

Result:
  - Auditor proves refusal occurred
  - Platform never sees original complaint
  - Other events remain private
This architecture enables regulatory verification while maintaining:
- User privacy: prompts are hashed, not stored in plaintext
- Platform operational confidentiality: bulk logs not exposed
- Evidentiary integrity: cryptographic proofs, not trust
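The auditor side of this flow reduces to two local computations; the sketch below assumes an inclusion proof formatted as (sibling-hash, side) pairs, with side `"L"` meaning the sibling is the left operand (an illustrative layout, not the normative proof encoding):

```python
import hashlib

def prompt_hash(prompt: str) -> str:
    """Compute the lookup digest locally; only this hash — never the
    complaint text — is transmitted to the platform."""
    return hashlib.sha256(prompt.encode("utf-8")).hexdigest()

def verify_merkle_proof(leaf, proof, root):
    """Independently verify that `leaf` is included under the anchored
    Merkle `root`, given a proof path of (sibling_hash, side) pairs."""
    h = leaf
    for sibling, side in proof:
        pair = sibling + h if side == "L" else h + sibling
        h = hashlib.sha256(pair.encode()).hexdigest()
    return h == root
```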
Regulatory Alignment
EU AI Act Article 12
CAP v1.0 Silver level satisfies EU AI Act logging requirements for high-risk AI systems:
| Article 12 Requirement | CAP v1.0 Implementation |
|---|---|
| "Automatic recording of events" | Event-driven logging architecture |
| "Over the lifetime of the system" | ChainID continuity from genesis |
| "Traceability of operation" | Hash chain, UUIDv7 ordering |
| "Monitoring of operation" | Evidence Pack export capability |
| "Minimum of six months" retention | Silver: 2 years; Gold: 5 years |
Compliance Timeline: Organizations subject to EU AI Act high-risk AI obligations (effective August 2, 2026) should implement CAP Silver level to demonstrate Article 12 compliance.
Digital Services Act
For Very Large Online Platforms (VLOPs) subject to DSA Article 37 independent audits, CAP Gold level provides:
| DSA Requirement | CAP v1.0 Support |
|---|---|
| Independent audit accessibility | Structured Evidence Pack format |
| Algorithm transparency documentation | ModelVersion, PolicyID tracking |
| Content moderation records | GEN_DENY events with RiskCategory |
| Risk mitigation effectiveness | Completeness Invariant verification |
GDPR Compliance
CAP v1.0 includes crypto-shredding support for GDPR compliance:
- Sensitive fields encrypted with per-user keys
- Key deletion renders personal data unrecoverable
- Hash chain integrity preserved (hashes remain valid)
- Audit trail structure maintained without personal data exposure
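The crypto-shredding principle can be illustrated with a deliberately simplified scheme below — a toy SHA-256-counter keystream, used here only to show the mechanism; a production implementation would use an authenticated cipher such as AES-GCM. The point is that deleting the per-user key destroys access to the plaintext while hashes over the stored ciphertext remain stable:

```python
import hashlib
import secrets

def keystream(key: bytes, n: int) -> bytes:
    """Toy SHA-256 counter-mode keystream — ILLUSTRATION ONLY,
    not a production cipher."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt_field(key: bytes, data: bytes) -> bytes:
    """XOR with the keystream; applying it twice decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# Sensitive field encrypted under a per-user key. Deleting `user_key`
# renders the plaintext unrecoverable, while the ciphertext (and any
# hash-chain entries computed over it) remains intact and verifiable.
user_key = secrets.token_bytes(32)
ct = encrypt_field(user_key, b"sensitive prompt text")
```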
Ecosystem Integration
C2PA Complementarity
| Aspect | C2PA | CAP |
|---|---|---|
| Primary Question | "Is this content authentic?" | "What did the AI system decide?" |
| Scope | Individual assets | System-wide behavior |
| Attachment | Embedded in content | System audit infrastructure |
| Metaphor | Content passport | System flight recorder |
VAP Framework Relationship
CAP v1.0 is a domain-specific profile within the Verifiable AI Provenance (VAP) Framework v1.2:
| Profile | Domain | Status |
|---|---|---|
| VCP | Algorithmic Trading | v1.1 Released |
| CAP | Content Generation | v1.0 Released |
| DVP | Autonomous Vehicles | Draft |
| MAP | Medical AI | Planned |
Implementation Guidance
Minimum Implementation Path (Bronze)
- Event Generation: Create CAP events for INGEST, TRAIN, GEN, EXPORT operations
- Hash Chain: Link events via SHA-256 PrevHash references
- Digital Signatures: Sign all events with Ed25519
- Storage: Retain events for minimum 6 months
- Verification: Implement chain integrity verification
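The verification step in the Bronze path amounts to replaying the chain; this sketch assumes the same illustrative event representation as above (dictionaries with `PrevHash` and `EventHash` fields, sorted-key JSON as a stand-in canonicalization):

```python
import hashlib
import json

def verify_chain(events, genesis="0" * 64):
    """Replay a hash chain: check each event's PrevHash linkage and
    recompute its EventHash over the body (excluding EventHash itself).
    Returns False on any break or tampered event."""
    prev = genesis
    for ev in events:
        if ev["PrevHash"] != prev:
            return False
        body = {k: v for k, v in ev.items() if k != "EventHash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if recomputed != ev["EventHash"]:
            return False
        prev = ev["EventHash"]
    return True
```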
Silver Level Path
- SRP Events: GEN_ATTEMPT before safety evaluation, GEN_DENY/GEN_ERROR/GEN for outcomes
- Completeness Verification: Automated invariant checking
- External Anchoring: Daily Merkle root submission to RFC 3161 TSA
- Evidence Pack: Export capability for audit requests
Conclusion
The release of CAP v1.0 marks a significant milestone in AI accountability infrastructure. For the first time, AI platforms have access to an open, standardized specification for creating audit trails that can prove not just what they generated, but what they refused to generate.
The regulatory landscape is clear: EU AI Act, Digital Services Act, Colorado AI Act, and emerging global frameworks all require AI transparency. CAP v1.0 provides the technical foundation to meet these requirements with cryptographic rigor rather than mere compliance claims.
The choice facing AI platforms is no longer whether to implement auditable logging, but how.
CAP v1.0 offers an open, interoperable, and verification-ready answer.
Contact
- General: standards@veritaschain.org
- Media: media@veritaschain.org
- Technical: technical@veritaschain.org
About VeritasChain Standards Organization
VeritasChain Standards Organization (VSO) is a vendor-neutral international standards body dedicated to developing cryptographic audit specifications for AI and algorithmic systems. Founded on the principle of "Verify, Don't Trust," VSO creates open specifications enabling independent verification of AI system behavior across finance, content generation, autonomous systems, and other high-stakes domains.