VAP Assessment Program

VAP-AT

Verifiable AI Provenance – Assessment Test

A measurement-based assessment program that scores AI systems on

Auditability · Verifiability · Regulatory Readiness

Core Design Principle (Score-Based)

VAP-AT is fundamentally a score-based assessment system. Threshold Designations are an optional interpretive layer to address practical requirements such as procurement and regulatory reporting — they are distinct from Pass/Fail certification. Scores are designed to promote continuous improvement and provide granular information to markets and regulators.

"Verify, Don't Trust" — Evaluate whether AI system audit trails exist in a mathematically verifiable form.

What Makes VAP-AT Different

Unlike traditional certifications, VAP-AT evaluates AI systems themselves — not individual skills

NOT Like CISSP

VAP-AT is not a personal skills certification (like CISSP). It evaluates AI systems themselves as the assessment target.

Score + Improvement Evidence

Instead of binary Pass/Fail, VAP-AT provides detailed scores and improvement evidence, enabling continuous trust-building. Threshold Designation is an optional interpretive layer for procurement and regulatory reporting.

What VAP-AT Measures

Three core properties evaluated across all AI systems

Auditability

Whether audit trails are tamper-resistant and independently verifiable by third parties.

EU AI Act Art.12/19, MiFID II RTS 25

Verifiability

Whether records are complete with no gaps, and whether observability, schema, and event granularity are properly established.

SEC 17a-4, GDPR Art.17
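As a minimal sketch of what a completeness check can look like, the code below assumes each logged decision event carries a contiguous integer sequence number (a hypothetical schema choice, not a VAP-AT requirement) and reports any gaps or duplicates.

```python
def sequence_issues(events: list[dict]) -> dict[str, list[int]]:
    """Report missing and duplicated sequence numbers in an event stream.

    Assumes events carry an integer 'seq' field assigned contiguously at
    write time (a hypothetical schema, not mandated by VAP-AT).
    """
    seqs = [e["seq"] for e in events]
    present = set(seqs)
    expected = range(min(seqs), max(seqs) + 1)
    return {
        "missing": [s for s in expected if s not in present],
        "duplicated": sorted({s for s in present if seqs.count(s) > 1}),
    }

events = [{"seq": 1}, {"seq": 2}, {"seq": 2}, {"seq": 5}]
print(sequence_issues(events))  # {'missing': [3, 4], 'duplicated': [2]}
```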

Regulatory Readiness

Whether evidence is "packaged" and ready for submission to auditors and regulators.

EU AI Act Annex IV
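VAP-AT does not prescribe a file format for the Evidence Pack; purely as an illustration, a machine-readable manifest might map each scoring criterion to its supporting artifacts. All keys and paths below are hypothetical.

```python
# Hypothetical Evidence Pack manifest (illustrative only; not a VAP-AT format).
# Each scoring criterion points at the artifacts an auditor would inspect.
evidence_pack = {
    "system": "loan-screening-assistant",
    "assessment_level": "VAP-AT Verified",
    "criteria": {
        "tamper_evidence": ["logs/chain-2025Q4.jsonl", "reports/chain-verification.txt"],
        "time_synchronization": ["ops/ntp-offset-report.pdf"],
        "documentation_completeness": ["docs/technical-file-annex-iv.pdf"],
    },
    "score_report": "reports/score-report-signed.pdf",
}
```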

What VAP-AT Does NOT Evaluate

AI model accuracy

Business logic validity

Security vulnerabilities

"Safety guarantees"

VAP-AT evaluates the verifiability and auditability of the decision process, not the correctness or legality of AI decisions.

Assessment Levels

Three-tier structure: Self → Third-party → Continuous

1

VAP-AT Self

Self-Assessment

For low-risk AI: recommendation engines, internal efficiency tools

  • Conducted by the organization itself
  • Produces Evidence Pack (Preliminary)
  • $0 – $5,000

Status: "Self-Assessment"

2

VAP-AT Verified

Third-Party Assessment

For medium-risk AI: HR Tech, financial screening support

  • Conducted by VSO-accredited CAB
  • CAB-signed Score Report + Evidence Pack
  • $15,000 – $150,000

Status: "Verified by [CAB Name]"

Type I: Point-in-time · Type II: Over a period
3

VAP-AT Continuous

Continuous Monitoring

For high-risk AI: medical diagnosis, autonomous driving, critical infrastructure

  • CAB + automated tools + spot checks
  • Monthly/quarterly Delta Reports
  • Annual subscription (20–50% of the initial assessment fee)

Status: "Continuous Monitoring Active"

Assessment Status Definitions

Active

Valid assessment state. Assessment completed, threshold met, subscription active.

Suspended

Temporarily suspended. Triggered by falling below threshold, critical log gaps, or fraud detection.

Pending Review

Awaiting review. Triggered by spot-check flags or complaints filed.

Expired

Validity period ended or subscription cancelled. Requires new assessment.

Automatic suspension: If scores fall below the threshold during Continuous Monitoring, the assessed entity is notified within 24 hours. If the issue is not remediated within 30 days, the status automatically transitions to "Suspended" and the Public Registry is updated immediately.
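The notification and remediation windows above imply a simple status state machine; the sketch below is one possible reading, not normative scheme text.

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

REMEDIATION_WINDOW = timedelta(days=30)  # from the rule above

def monitoring_status(score: int, threshold: int,
                      below_since: Optional[datetime],
                      now: datetime) -> tuple[str, Optional[datetime]]:
    """Return (status, below_since) for a continuously monitored system.

    Falling below the threshold starts the clock (and should trigger the
    24-hour notification); exceeding the 30-day remediation window moves
    the status to "Suspended" and the Public Registry is updated.
    """
    if score >= threshold:
        return "Active", None                      # at or above threshold / remediated
    if below_since is None:
        below_since = now                          # notify assessed entity within 24 h
    if now - below_since > REMEDIATION_WINDOW:
        return "Suspended", below_since            # registry updated immediately
    return "Active", below_since                   # still inside remediation window

now = datetime.now(timezone.utc)
print(monitoring_status(12, 14, None, now))                      # clock starts
print(monitoring_status(12, 14, now - timedelta(days=31), now))  # ('Suspended', ...)
```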

10 Scoring Criteria

10 criteria × 0–2 points = Maximum 20 points

0 = Not implemented
1 = Partially implemented
2 = Fully implemented
#  | Criterion | Key Question | Regulatory Mapping
1  | Third-Party Verifiability | Can external parties independently verify audit trails? | EU AI Act Art.12/19, MiFID II
2  | Tamper Evidence | Can unauthorized modifications be detected? | SEC 17a-4
3  | Sequence Fixation | Is chronological order immutably recorded? | MiFID II RTS 25
4  | Decision Provenance | Can decision inputs and rationale be traced? | EU AI Act Art.12
5  | Responsibility Boundaries | Are approvers and overriders clearly identified? | EU AI Act Art.14
6  | Documentation Completeness | Is technical documentation complete and current? | EU AI Act Annex IV
7  | Retention & Availability | Is evidence retained and retrievable for required periods? | GDPR, MiFID II
8  | Time Synchronization | Is system time synchronized with trusted sources? | MiFID II RTS 25
9  | Failure & Recovery Logging | Are system failures and recoveries recorded? | DORA
10 | Right to Erasure Compatibility | Can GDPR erasure be supported while maintaining auditability? | GDPR Art.17
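The scoring arithmetic is straightforward to automate; a minimal sketch (criterion identifiers abbreviated for readability) sums the ten ratings and validates that each is 0, 1, or 2.

```python
CRITERIA = [
    "third_party_verifiability", "tamper_evidence", "sequence_fixation",
    "decision_provenance", "responsibility_boundaries",
    "documentation_completeness", "retention_availability",
    "time_synchronization", "failure_recovery_logging",
    "erasure_compatibility",
]

def total_score(ratings: dict[str, int]) -> int:
    """Sum the ten criterion ratings (each 0, 1, or 2; maximum 20)."""
    missing = set(CRITERIA) - ratings.keys()
    if missing:
        raise ValueError(f"unrated criteria: {sorted(missing)}")
    if any(ratings[c] not in (0, 1, 2) for c in CRITERIA):
        raise ValueError("each rating must be 0, 1, or 2")
    return sum(ratings[c] for c in CRITERIA)

ratings = dict.fromkeys(CRITERIA, 2)      # fully implemented everywhere...
ratings["erasure_compatibility"] = 1      # ...except partial GDPR-erasure support
print(total_score(ratings))               # 19
```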

Threshold Designation (Optional Interpretive Layer)

Threshold Designations provide convenient labels based on score thresholds, but they are not guarantees of legal compliance.

Important Disclaimer: Threshold Designations are convenience labels within the VAP-AT scheme and do not constitute legal compliance guarantees. "EU AI Act aligned" indicates a VAP-AT score that aligns with the auditability standards required by the relevant articles — it does not certify legal compliance. Legal compliance determinations remain the responsibility of the assessed entity and ultimately depend on regulatory authority interpretation.

Score 16+

VAP-AT Auditability Threshold – EU AI Act Art.12/19 aligned

Score 14+

VAP-AT Auditability Threshold – MiFID II RTS 25 aligned

Score 11+

VAP-AT Baseline Auditability Threshold

Grade Interpretation

Score | Grade | Interpretation
16–20 | Strong | Robust auditability demonstrated
11–15 | Moderate | Auditable with room for improvement
6–10 | Limited | Significant auditability gaps
0–5 | Inadequate | Fundamentally insufficient
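Using the bands and thresholds above, a score maps mechanically to a grade and to the optional designations; as the disclaimer notes, these labels are interpretive, not legal compliance guarantees. A minimal sketch:

```python
def grade(score: int) -> str:
    """Map a 0-20 total score to the interpretation bands above."""
    if score >= 16:
        return "Strong"
    if score >= 11:
        return "Moderate"
    if score >= 6:
        return "Limited"
    return "Inadequate"

def threshold_designations(score: int) -> list[str]:
    """Optional interpretive labels; not guarantees of legal compliance."""
    labels = []
    if score >= 16:
        labels.append("EU AI Act Art.12/19 aligned")
    if score >= 14:
        labels.append("MiFID II RTS 25 aligned")
    if score >= 11:
        labels.append("Baseline Auditability Threshold")
    return labels

print(grade(15), threshold_designations(15))
# Moderate ['MiFID II RTS 25 aligned', 'Baseline Auditability Threshold']
```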

4-Layer Governance Structure

Structural separation of standards-setting and assessment execution to ensure independence and credibility

0

National Accreditation Bodies

UKAS, DAkkS, ANAB, JAB, etc.

Accredit CABs to ISO/IEC 17020/17021/17065. Enable global acceptance through IAF MLA mutual recognition.

1

Standards-Setting / Scheme Owner (VSO)

VeritasChain Standards Organization

Manages VAP-AT criteria, evidence requirements, terminology, and report specifications. Does not conduct assessments (conflict of interest avoidance).

2

Advisory Board

Technical advisors, regulatory observers

Technical consultation, regulatory trend monitoring, conflict of interest oversight. Includes regulator observer seats.

3

Assessment Execution (CAB)

Conformity Assessment Bodies

Multiple independent CABs conduct VAP-AT and issue Score Reports. Pilot phase includes double review by independent CABs, external-majority Impartiality Committee, VSO spot audits, and conflict-of-interest disclosures.

4

Commercial Enablers

Tools, training, preparation support

Separated from assessment execution to prevent a "standards body sells the pass" conflict. Provided by a VSO commercial subsidiary or accredited partners.

Regulatory Tailwinds & Market Opportunity

Strong regulatory drivers creating demand for AI auditability solutions

Key Regulations

EU AI Act €35M or 7% revenue

Automatic event logging (Art.12), log retention (Art.19), evidentiary robustness

MiFID II RTS 25 Regulatory penalties

100μs time sync, annual self-assessment

SEC 17a-4 WORM compliance

Electronic records retention requirements

NYC Local Law 144 $500-$1,500/violation

Hiring AI bias audit mandate

Market Benchmarks

$83B → $200B

RegTech market (2023→2028, ≈140% growth)

$15.8B

AI Governance market by 2030 (30% CAGR)

$30K–$100K

B2B SaaS willingness-to-pay for SOC 2-equivalent

$250K–$3M

FedRAMP-equivalent total cost accepted

Roadmap

Phased rollout from foundation to global scale

0
Foundation

2025 Q4

  • Program Charter
  • Scoring Criteria v1.0
  • Self-Assessment Tool
  • CAB Requirements
1
Pilot

2026 Q1–Q2

  • First CAB accredited
  • 3–5 pilot assessments
  • Independence guardrails
  • ISO 17065 prep

Pilot CAB Safeguards Applied

2
Launch

2026 Q3–Q4

  • VAP-AT Verified launch
  • Public Registry
  • EU AI Act Notified Body
  • Insurance partnerships
3
Scale

2027+

  • Continuous Monitoring
  • Global CAB network
  • IAF MLA participation
  • Insurance integration

Phase 1: Pilot CAB Independence Safeguards

To ensure credibility during the pilot phase, where the initial CAB may have organizational ties to the VSO, the following enhanced safeguards are mandatory:

Dual Review

All pilot assessment reports undergo independent third-party CAB review

External Majority

Impartiality Committee must have external members as majority

Random Audits

VSO conducts spot audits on 20%+ of pilot CAB reports

Disclosure

CAB-VSO relationship disclosed in writing before assessment

Phase 2 Transition: The pilot CAB will not continue to operate as the sole assessor; Phase 2 proceeds only after at least two independent CABs are accredited.

Prepare Your AI Systems for Auditability

Get ahead of regulatory requirements with VAP-AT. Start with self-assessment or engage directly with our pilot program.

Contact: info@veritaschain.org
