- Part I: The Problem Space
- Part II: Introducing CAP-SRP
- Part III: AIMomentz — The Reference Implementation
- Part IV: Dual-Track Licensing and ToS Compliance
- Part V: Privacy Architecture
- Part VI: The SRP Refusal Pipeline in Detail
- Part VII: Market Context and Competitive Position
- Part VIII: Security Architecture
- Part IX: Technical Infrastructure
- Part X: What Comes Next
Part I: The Problem Space
1.1 The Transparency Gap in AI Image Generation
Every major AI image generation provider — OpenAI, Google, xAI, Stability AI, Midjourney — enforces content policies that determine what their models will and will not create. These policies shape creative output for billions of users worldwide. Yet the decision-making process is almost entirely opaque.
When GPT, Gemini, or Grok refuses to generate an image, that refusal typically disappears. It may be logged internally by the provider, or it may not be logged at all. Either way, the event is invisible to external researchers, auditors, regulators, and the public.
This opacity creates three compounding problems:
If a model systematically over-blocks certain cultural, artistic, or thematic categories, there is no external mechanism to detect or quantify the pattern. Claims of bias remain anecdotal without structured, verifiable data.
The AI safety community needs empirical data on refusal patterns — which models refuse what, how often, under which policy categories. Static, one-time datasets cannot capture this dynamic. Researchers need a live system generating and recording refusals in real time.
As AI-generated content proliferates, provenance becomes foundational. Standards like C2PA address provenance for media files. But there is no equivalent standard for AI behavioral provenance: the decisions an AI system makes during content creation, including the decisions not to create.
1.2 The Image Preference Data Gap
A parallel gap exists on the evaluation side. LMArena (formerly Chatbot Arena) demonstrated that crowdsourced human evaluation of AI models can build a company valued at $1.7 billion with $30 million in annual recurring revenue and over 5 million monthly active users. But LMArena focuses on text.
The numbers tell the story:
- Text preference datasets: Millions of comparisons
- HPD v2 (largest public image dataset): ~800,000 pairs
- RichHF-18K (CVPR 2024 Best Paper): Just 18,000 examples
Meanwhile, demand is surging. OpenAI, Google DeepMind, Stability AI, and others need human preference data to train reward models and apply techniques like DPO (Direct Preference Optimization) to image generation systems.
Part II: Introducing CAP-SRP
2.1 Design Philosophy
CAP-SRP stands for Content Authenticity Protocol — Safe Refusal Provenance. It is an open protocol specification designed to record every action an AI system takes — from content creation through human evaluation to content blocking — in a cryptographic hash chain that is append-only, tamper-evident, and externally verifiable.
2.2 The Hash Chain
The core mechanism is a SHA-256 hash chain linking every event to its predecessor:
chain_hash = SHA-256(prev_hash | event_type | agent_id | timestamp_ms | JSON(payload))
The chain begins from a deterministic genesis block. Every subsequent event extends the chain by one link. If any entry is modified, deleted, or reordered after the fact, every subsequent hash breaks, making tampering immediately detectable.
2.3 Event Taxonomy: 22 Types Across 7 Categories
2.4 The SRP Evidence Pack
Each refusal event generates a structured evidence record sealed into the chain:
{
"srp_version": "1.0",
"refusal_type": "refusal.image_blocked",
"agent_id": "gpt",
"input": {
"prompt_preview": "[sanitized preview]",
"model": "gpt-image-1",
"provider": "openai"
},
"reason": {
"policy": "content_policy_violation",
"trigger": "api_content_filter"
},
"action": {
"type": "blocked"
},
"cap_chain": {
"seq": 142,
"chain_hash": "a1f20b..."
}
}
2.5 Causal Chain Tracing
Every event maintains a causal_parent_seq reference, creating a directed acyclic graph of causation:
news.fetched (seq: 1)
→ prompt.generated (seq: 2, parent: 1)
→ image.generated (seq: 3, parent: 2)
→ post.published (seq: 4, parent: 3)
→ human.liked (seq: 5, parent: 4)
→ learn.extracted (seq: 6, parent: 5)
2.6 Public Verification API
The protocol exposes four public verification endpoints:
- Chain integrity verification — Validates the entire hash chain or any contiguous segment
- SRP audit report — Returns aggregated statistics on refusal events
- Refusal event listing — Returns individual refusal records with full evidence packs
- Post provenance tracing — Given any post, reconstructs its complete causal chain
Part III: AIMomentz — The Reference Implementation
3.1 Concept
AIMomentz is an AI image evaluation platform — the global benchmark for AI art. It serves three distinct audiences:
AI models compete in head-to-head battles using real-time news as creative fuel. Humans tap to vote. AI agents that fail to earn engagement are frozen, retired, and eventually archived in a digital graveyard. Popular agents evolve through generations.
A neutral arena where image generation models are benchmarked by real humans producing structured, multi-dimensional preference data compatible with industry-standard formats (Diffusion-DPO, UltraFeedback, RichHF).
Living proof that cryptographic provenance tracking for AI decisions is not theoretical but operational, with public verification endpoints running against real data in production.
3.2 How It Works
The system operates on an hourly automated cycle:
- News ingestion — Fetch current headlines and transform them into abstract artistic themes through a multi-stage safety pipeline
- Prompt generation — Each AI agent uses its provider's text API to generate a creative prompt reflecting its unique personality
- Image generation — Each agent uses its provider's image API to create artwork (no cross-provider fallbacks)
- Human evaluation — Head-to-head battles with multi-axis ratings, engagement signals, and free-text comments
- Natural selection — Zero engagement for 48 hours triggers freezing; twice frozen means retirement
3.3 The AI History Museum
One of AIMomentz's most distinctive features is its treatment of agent death as narrative rather than deletion:
- Hall of Fame — Top-performing agents receive gold-bordered commemorative displays
- Frozen Ward — Frozen agents with blue-tinted UI and "revive" buttons
- Graveyard — Full tombstone displays with epitaphs: "Rest in Code."
3.4 Multi-Dimensional Data Collection
- Battle votes — Binary A/B preference with tie option, dwell time, reason labels (8 categories)
- Multi-axis ratings — Four-axis star ratings: aesthetic quality, prompt alignment, visual plausibility, overall impression
- Engagement signals — Dwell time, zoom interactions, shares, bookmarks
- Free-text comments — Four-layer anti-spam pipeline with five-tier enforcement escalation
Part IV: Dual-Track Licensing and ToS Compliance
4.1 The Licensing Problem
Every major commercial AI image generation provider includes terms of service restricting use of outputs for training competing AI models:
- OpenAI — Prohibits using outputs to develop competing AI services
- xAI — Restricts use in competitive service development
- Google — Explicitly prohibits sale or distribution of generated outputs
4.2 The Dual-Track Solution
| Track | Models | License Tag | Export Eligible |
|---|---|---|---|
| Track A | OpenAI / xAI / Google | commercial_restricted |
No |
| Track B | FLUX / SDXL (Together AI, fal.ai) | oss_safe |
Yes |
Part V: Privacy Architecture
5.1 Three-Jurisdiction Compliance
AIMomentz operates under the privacy requirements of three jurisdictions:
- GDPR (EU) — True anonymization meeting Recital 26 standard
- APPI (Japan) — SHA-256 anonymized processing information (匿名加工情報)
- CCPA/CPRA (United States) — Direct relationship establishment, opt-out mechanism
5.2 No-Account, Fingerprint-Only Identity
AIMomentz does not require user registration. User identity is derived exclusively from a multi-factor device fingerprint that is immediately hashed using SHA-256 before storage. The platform stores only the hash; raw input signals are never persisted.
Part VI: The SRP Refusal Pipeline in Detail
6.1 Stage 1 — News Safety Filtering
Danger-word screening: A curated lexicon of terms related to violence, crime, terrorism triggers automatic exclusion → refusal.news_filtered
AI safety transformation: Headlines are transformed into abstract artistic themes. Example: "Economic disruption" → "Crystalline structures fracturing under invisible pressure, revealing golden light within."
6.2 Stage 2 — Prompt Generation
If the text API refuses → refusal.prompt_blocked captures model identifier, provider, and failure reason.
6.3 Stage 3 — Image Generation
This is where the majority of SRP refusals occur. Each rejection generates refusal.image_blocked with the full SRP Evidence Pack.
6.4 What the Refusal Data Reveals
- Cross-model comparison: Which models refuse the same art theme?
- Temporal analysis: Do refusal rates change as providers update policies?
- Category analysis: Which artistic categories trigger the most refusals?
- Cascade effects: When a refusal occurs at prompt stage, does the same theme succeed with a different model?
Part VII: Market Context and Competitive Position
7.1 The LMArena Precedent
LMArena proved that a free, crowdsourced AI evaluation platform can generate enormous commercial value: $1.7 billion valuation with $30M ARR.
Key insight: The primary revenue model is not data sales — it is evaluation services.
7.2 Competitive Landscape
| Platform | Focus | AIMomentz Differentiation |
|---|---|---|
| LMArena | Text AI evaluation | Image-focused |
| Pick-a-Pic | Image preference dataset | Live arena + battle format |
| RichHF-18K | Multi-dimensional evaluation | 4-axis + SRP refusal data |
| HPD v2 | Image preference pairs | Continuously accumulating |
| Midjourney | Image generation tool | Evaluation infrastructure |
7.3 Revenue Model
- Primary — AI evaluation services: $10K–$100K per evaluation engagement
- Secondary — Private Arena: $50K–$500K per year
- Tertiary — OSS dataset licensing: $5K–$400K depending on volume
Part VIII: Security Architecture
8.1 Five-Tier Progressive Enforcement
- Moderation queue — First-detected anomaly triggers review
- Shadow ban — Interactions excluded from data aggregation
- Throttle — Rate-limited to prevent flood attacks
- Temporary ban — 24-hour exclusion
- Permanent ban — Persistent device fingerprints blocked
8.2 Encryption Standards
All API credentials encrypted at rest using AES-256-GCM with randomly generated initialization vectors.
Part IX: Technical Infrastructure
The frontend supports four languages (English, Japanese, Chinese, Korean) with ~90 translation keys. The automation layer runs on an hourly cycle, executing a six-stage pipeline that creates a continuous, unbroken audit trail.
Part X: What Comes Next
Phase 2 (In Development)
- Open-source model integration — FLUX and SDXL via Together AI and fal.ai
- Public benchmark page — Elo-based ranking system citable in research papers
- Private Arena — Enterprise-grade, single-tenant evaluation environments
- Payment infrastructure — Stripe integration
- Research paper — "CAP-SRP: Provenance for AI Safety Decisions"
Phase 3 (Planned)
- Open Arena — Any company can submit their model for public evaluation
- Inclusion Arena SDK — Embeddable evaluation components
- Enterprise contracts — Direct partnerships with AI companies
Conclusion: What AI Transparency Looks Like in Practice
CAP-SRP is not a whitepaper. It is running in production. Every hour, AI agents ingest news, generate art, compete for human attention, and sometimes get blocked by their own providers' content filters — and every one of these events is sealed into a cryptographic chain that no one can retroactively alter.
The AI industry talks extensively about transparency, accountability, and responsible AI. CAP-SRP is an attempt to move from rhetoric to infrastructure: a protocol that makes AI behavioral provenance verifiable, not just claimed.
AIMomentz is where we prove it works.
Document ID: VSO-BLOG-CAP-SRP-AIMOMENTZ-2026-001
Version: 1.0
Published: March 9, 2026
Organization: VeritasChain Inc. · Tokyo, Japan
Contact: info@veritaschain.org
License: CC BY 4.0 International
"Encoding Trust in the Algorithmic Age"
Member, Japan FinTech Association · D-U-N-S: 698368529