Evidence Framework for AI in Creative Industries
"Leave Evidence, Not Barriers"
— The problem is not AI itself, but its black-box nature
" AI is already widely used in content industries, and stopping its use is not realistic. What matters is leaving cryptographically undeniable records of "who," "what," "with what authority," and "when" assets were ingested, trained, generated, or exported — enabling post-hoc verification when disputes arise. "
Rights infringement. Confidential leaks. Personality rights violations. Deepfakes.
When disputes arise, how can you prove what happened in the AI workflow?
CAP provides the answer — a tamper-proof evidence trail for creative AI.
CAP Specification v0.2
VSO-CAP-SPEC-001 — Full Technical Specification
Safe Refusal Provenance PoC
Verifiable evidence pack for refused requests
Structural challenges in AI-powered creative workflows
Unable to trace rights basis for materials fed to AI. When disputes arise, proving legitimate use becomes impossible.
Unclear whether consent was given for training/generation. Voice cloning, likeness use, and style mimicry happen without proper authorization records.
Unreleased materials lack classification management. Pre-release characters, designs, footage ingested into AI without confidentiality controls.
| Challenge | Description | CAP Solution |
|---|---|---|
| Rights Provenance Opacity | Unable to trace rights basis for materials fed to AI | RightsBasis field records rights justification |
| Consent Ambiguity | Unclear whether consent was given for training/generation | ConsentBasis captures consent state |
| Confidentiality Gap | Unreleased materials lack classification management | ConfidentialityLevel for classification |
| Accountability Boundaries | Cannot identify responsible parties when disputes arise | User/Role records actors |
| Tampering Possibility | Records can be altered post-facto | Hash Chain provides cryptographic integrity |
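The last row of the table relies on hash chaining. Below is a minimal sketch of that mechanism in Python, assuming the PrevHash/EventHash field names used in the sample event later in this document and a sorted-key JSON canonicalization; the specification's actual canonicalization and signing rules take precedence.

```python
import hashlib
import json

def compute_event_hash(event: dict) -> str:
    """Hash the event body (all fields except EventHash/Signature) as canonical JSON."""
    body = {k: v for k, v in event.items() if k not in ("EventHash", "Signature")}
    canonical = json.dumps(body, sort_keys=True, separators=(",", ":")).encode("utf-8")
    return "sha256:" + hashlib.sha256(canonical).hexdigest()

def append_event(chain: list[dict], event: dict) -> dict:
    """Link a new event to the previous one via PrevHash, then seal it with EventHash."""
    event["PrevHash"] = chain[-1]["EventHash"] if chain else None
    event["EventHash"] = compute_event_hash(event)
    chain.append(event)
    return event

def verify_chain(chain: list[dict]) -> bool:
    """Recompute every hash and link; any edited or reordered event breaks verification."""
    prev_hash = None
    for event in chain:
        if event["PrevHash"] != prev_hash or event["EventHash"] != compute_event_hash(event):
            return False
        prev_hash = event["EventHash"]
    return True
```

Because each EventHash covers the event body and each PrevHash covers the previous EventHash, altering or removing any recorded event invalidates every later link in the chain.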
CAP application scope by industry priority
Games
AAA / Publishers / Studios / Outsourcing
Risk: IP dilution, confidential leaks, character mimicry
Film / Animation / Streaming
Production / VFX / Post-production / OTT
Risk: Actor likeness, voice cloning, unreleased footage leaks
Publishing
Manga / Books / Editorial Production
Risk: Style mimicry, manuscript leaks, translation quality
Music
Labels / Distribution / MV Production
Risk: Voice cloning, song imitation, rights processing
Adult Content
Production / Distribution / Platforms
Risk: Deepfakes, non-consensual generation
Corporate Branding
Web / IR / Design
Risk: Tone mimicry, brand dilution
Five categories of threats CAP addresses
A creator's unique style or worldview is easily mimicked by AI, eroding brand value and making market differentiation difficult.
Third parties train AI on proprietary materials, and near-identical generated outputs appear on the market.
Unreleased characters, designs, footage, or code are fed to AI and leak through training-data pathways.
Private images and video are used without consent, fueling defamation, sexual content, and harassment.
Corporate web, IR, and design tone is imitated, and generated content is mistaken for official communications.
Core events and SRP extension events in the creative AI lifecycle
| Code | Event Type | Phase | Description | Required |
|---|---|---|---|---|
| 0x0100 | INGEST | Input | Asset ingestion into AI workflow | SHOULD |
| 0x0200 | TRAIN | Training | Model training/fine-tuning | MAY |
| 0x0300 | GEN | Generation | Content generation (allowed) | MUST |
| 0x0310 | GEN_ATTEMPT | Pre-generation | Generation request received [SRP] | SHOULD* |
| 0x0311 | GEN_DENY | Decision | Generation refused [SRP] | SHOULD* |
| 0x0312 | GEN_WARN | Decision | Allowed with warning [SRP] | MAY* |
| 0x0313 | GEN_ESCALATE | Decision | Escalated to human review [SRP] | MAY* |
| 0x0314 | GEN_QUARANTINE | Decision | Generated but quarantined [SRP] | MAY* |
| 0x0400 | EXPORT | Output | Asset delivery/publication | SHOULD |
| 0x0F00 | AUDIT_ANCHOR | System | External anchoring event | MAY |
* Part of SRP Extension (Safe Refusal Provenance)
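For implementers, the codes in the table above map naturally onto an enum. The hex values and requirement levels are taken from the table; the class name and comments are illustrative, a sketch rather than normative code.

```python
from enum import IntEnum

class CapEventType(IntEnum):
    """CAP core and SRP-extension event codes (values from the table above)."""
    INGEST = 0x0100          # Asset ingestion into AI workflow (SHOULD)
    TRAIN = 0x0200           # Model training/fine-tuning (MAY)
    GEN = 0x0300             # Content generation, allowed (MUST)
    GEN_ATTEMPT = 0x0310     # [SRP] Generation request received (SHOULD)
    GEN_DENY = 0x0311        # [SRP] Generation refused (SHOULD)
    GEN_WARN = 0x0312        # [SRP] Allowed with warning (MAY)
    GEN_ESCALATE = 0x0313    # [SRP] Escalated to human review (MAY)
    GEN_QUARANTINE = 0x0314  # [SRP] Generated but quarantined (MAY)
    EXPORT = 0x0400          # Asset delivery/publication (SHOULD)
    AUDIT_ANCHOR = 0x0F00    # External anchoring event (MAY)
```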
Code: 0x0100 — Asset Ingestion
Records the ingestion of assets into an AI workflow.
Required Fields:
Code: 0x0200 — Model Training
Records model training or fine-tuning activities.
Required Fields:
Code: 0x0300 — Content Generation
Records content generation activities (when allowed).
Required Fields:
Code: 0x0400 — External Output
Records asset delivery or publication.
Required Fields:
Safe Refusal Provenance
Cryptographic evidence that harmful content was NOT generated
The December 2025–January 2026 Grok incident exposed a critical weakness in AI content moderation: there is no standard way to prove that dangerous content was NOT generated.
| Event Type | Current Systems | With SRP |
|---|---|---|
| Generation allowed | Logged | Logged |
| Generation refused | No record | Cryptographically proven |
When regulators ask "Prove your safeguards worked," existing systems cannot answer. SRP closes this gap.
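As a hypothetical sketch, an SRP-aware generation endpoint might record a refusal like this: the GEN_ATTEMPT event is written before risk assessment, and the GEN_DENY outcome references it via AttemptID. Function and field names other than the event types and AttemptID are illustrative assumptions, not taken from the specification.

```python
import uuid
from datetime import datetime, timezone

def record_refusal(chain: list, prompt_hash: str, category: str) -> tuple[dict, dict]:
    """Write a GEN_ATTEMPT / GEN_DENY pair so the refusal itself becomes evidence."""
    attempt_id = str(uuid.uuid4())

    attempt = {  # 0x0310: request received, logged before any risk decision
        "EventType": "GEN_ATTEMPT",
        "AttemptID": attempt_id,
        "PromptHash": prompt_hash,       # hash only; the raw prompt is not stored here
        "Timestamp": datetime.now(timezone.utc).isoformat(),
    }
    deny = {     # 0x0311: refusal outcome, bound to the attempt it answers
        "EventType": "GEN_DENY",
        "AttemptID": attempt_id,
        "RefusalCategory": category,     # e.g. "REAL_PERSON_DEEPFAKE"
        "Timestamp": datetime.now(timezone.utc).isoformat(),
    }
    chain.extend([attempt, deny])        # in practice, appended via the hash chain
    return attempt, deny
```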
Code: 0x0310 — Request Received
Records that a generation request was received (before risk assessment).
Required Fields:
Code: 0x0311 — Request Refused
Records that a generation request was refused.
Required Fields:
| Refusal Category | Description |
|---|---|
| CSAM_RISK | Child sexual abuse material risk |
| NCII_RISK | Non-consensual intimate imagery |
| MINOR_SEXUALIZATION | Sexualization of minors |
| REAL_PERSON_DEEPFAKE | Unauthorized real-person imagery |
| VIOLENCE_EXTREME | Extreme violence/gore |
| HATE_CONTENT | Hate speech or discrimination |
For every GEN_ATTEMPT event, there MUST exist exactly one outcome event (GEN, GEN_DENY, GEN_WARN, GEN_ESCALATE, or GEN_QUARANTINE) with a matching AttemptID.
No Unmatched Attempts
Every request has a recorded decision
No Orphan Outcomes
Every decision corresponds to a real request
Count Invariant Holds
Complete audit trail exists
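A minimal auditor sketch for these invariants: it flags GEN_ATTEMPT events that do not have exactly one outcome, and outcomes whose AttemptID matches no recorded attempt. The outcome event types and the AttemptID field come from this section; the function itself is illustrative.

```python
from collections import Counter

OUTCOME_TYPES = {"GEN", "GEN_DENY", "GEN_WARN", "GEN_ESCALATE", "GEN_QUARANTINE"}

def check_attempt_invariant(events: list[dict]) -> list[str]:
    """Return a list of violations of the one-attempt / one-outcome rule."""
    attempts = {e["AttemptID"] for e in events if e["EventType"] == "GEN_ATTEMPT"}
    outcomes = Counter(e["AttemptID"] for e in events
                       if e["EventType"] in OUTCOME_TYPES and "AttemptID" in e)
    violations = []
    for attempt_id in attempts:
        if outcomes.get(attempt_id, 0) != 1:
            violations.append(f"attempt {attempt_id} has {outcomes.get(attempt_id, 0)} outcomes")
    for attempt_id in outcomes:
        if attempt_id not in attempts:
            violations.append(f"orphan outcome for unknown attempt {attempt_id}")
    return violations
```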
Key field categories in CAP events
RightsBasis Enum
ConsentBasis Enum
ConfidentialityLevel Enum
Role Enum
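As a rough sketch, the four field categories could be modeled as enums. Only the values marked as appearing in the sample event below are taken from this document; the extra members are hypothetical placeholders, and the full value sets are defined in the specification.

```python
from enum import Enum

class RightsBasis(Enum):
    OWNED = "OWNED"                  # appears in the sample event below
    LICENSED = "LICENSED"            # hypothetical value, for illustration only

class ConsentBasis(Enum):
    NOT_REQUIRED = "NOT_REQUIRED"    # appears in the sample event below
    EXPLICIT = "EXPLICIT"            # hypothetical value, for illustration only

class ConfidentialityLevel(Enum):
    PRE_RELEASE = "PRE_RELEASE"      # appears in the sample event below
    PUBLIC = "PUBLIC"                # hypothetical value, for illustration only

class Role(Enum):
    CREATOR = "CREATOR"              # appears in the sample event below
    REVIEWER = "REVIEWER"            # hypothetical value, for illustration only
```

The sample INGEST event below shows these fields in context.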
```json
{
"EventID": "01945f2a-8b3c-7f93-9f3a-1234567890ab",
"ChainID": "01945e3a-6a1b-7c82-9d1b-0987654321dc",
"PrevHash": null,
"Timestamp": "2026-01-10T10:00:00.000Z",
"EventType": "INGEST",
"HashAlgo": "SHA256",
"SignAlgo": "ED25519",
"Asset": {
"AssetID": "urn:cap:asset:studio-a:char-design-001",
"AssetType": "IMAGE",
"AssetHash": "sha256:a7ffc6f8bf1ed76651c14756a061d662...",
"AssetName": "Main Character Design Draft A"
},
"Rights": {
"RightsBasis": "OWNED",
"RightsHolder": "studio-a",
"ConsentBasis": "NOT_REQUIRED",
"PermittedUse": {
"Training": true,
"Generation": true,
"Distribution": false,
"Commercial": true
}
},
"Confidentiality": {
"ConfidentialityLevel": "PRE_RELEASE",
"ReleaseDate": "2026-04-01T00:00:00.000Z"
},
"Context": {
"UserID": "user-12345",
"Role": "CREATOR",
"Department": "Character Design"
},
"EventHash": "sha256:b8ffc7f9cf2fe87762d25867b172e773...",
"Signature": "ed25519:MEUCIQDh..."
}
```
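A sketch of how a verifier might recompute the sample event's EventHash and check its Ed25519 signature using the cryptography library. The canonicalization (sorted-key JSON over all fields except EventHash and Signature, with the signature taken over the EventHash value) and the signature encoding are assumptions for illustration; the specification defines the authoritative rules.

```python
import base64
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_event(event: dict, public_key_bytes: bytes) -> bool:
    """Recompute EventHash over the canonical body, then verify the Ed25519 signature."""
    body = {k: v for k, v in event.items() if k not in ("EventHash", "Signature")}
    canonical = json.dumps(body, sort_keys=True, separators=(",", ":")).encode("utf-8")
    expected = "sha256:" + hashlib.sha256(canonical).hexdigest()
    if event["EventHash"] != expected:
        return False

    # Signature format assumed to be "ed25519:<base64>"; the spec may encode it differently.
    sig = base64.b64decode(event["Signature"].split(":", 1)[1])
    try:
        Ed25519PublicKey.from_public_bytes(public_key_bytes).verify(
            sig, event["EventHash"].encode("utf-8")
        )
        return True
    except InvalidSignature:
        return False
```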
CAP's connection to global regulations
| Regulation | Jurisdiction | CAP Relevance |
|---|---|---|
| EU AI Act | EU | Art.12 Logging, Art.53 Transparency |
| Digital Services Act (DSA) | EU | Art.35 systemic risk mitigation, audit trails |
| GDPR | EU | Processing records, consent management |
| Copyright Directive | EU | TDM exception, opt-out rights |
| TAKE IT DOWN Act [NEW in v0.2] | USA | NCII evidence requirements, 48-hour response proof |
| Copyright Law Art. 30-4 | Japan | AI training exception documentation |
Graduated adoption based on organizational readiness
| Target Organizations | Event Creation Target |
|---|---|
| Small studios, individuals | <100 ms |
| Mid-size production companies | <50 ms |
| Major publishers, studios | <10 ms |
CAP's position in the framework hierarchy
| Aspect | VCP (Finance) | CAP (Content/Creative) |
|---|---|---|
| Subject | Transaction | Content/IP Asset |
| Industries | Finance, Trading | Games, Film, Publishing, Music |
| Core Events | SIG/ORD/EXE/CXL | INGEST/TRAIN/GEN/EXPORT |
| Regulations | MiFID II, EU AI Act | EU AI Act, DSA, Copyright Law |
| Time Precision | Nanosecond–Millisecond | Second–Minute |
Both protocols share the same foundation: the VAP Integrity Layer (Hash Chain, Merkle Tree, Ed25519 Signature).
Join the development of CAP and shape the future of AI governance in creative industries
"The question is not whether to use AI in creative work.
The question is whether you can prove what happened when disputes arise."
— VeritasChain Standards Organization
"Leave Evidence, Not Barriers."
This work is licensed under CC BY 4.0 International
CAP Specification v0.2.0 — Last Updated: 2026-01-10