Aligned with VeritasChain Protocol (VCP)
Measure Algorithmic Trading Transparency with a Common Score
A vendor-neutral, standard-aligned benchmark for evaluating auditability
— usable with or without VCP implementations.
The current state of algorithmic trading auditability
AI-driven trading decisions are opaque. When regulators ask "why?", there's no auditable answer.
Logs are recorded, but authenticity and sequence cannot be proven. Timestamps can be disputed.
Audits fail at the final stage: evidence quality. Manual evidence gathering takes days, and formats are inconsistent.
A local-only, audit-safe reference implementation for running the AI Decision Auditability Benchmark and exporting regulator-ready evidence.
VAP Scorecard Explorer is a reference implementation of the AI Decision Auditability Benchmark (10 criteria / 20 points), built for audit and assurance teams.
Privacy & Security
All processing runs locally. No network communication. No external APIs. No analytics.
Open Scorecard Explorer (Local-Only)
Benchmark specification and scoring criteria are published openly as the canonical reference.
A diagnostic score, not an implementation proposal
This is not a technology adoption proposal.
This benchmark enables organizations to diagnose their auditability against an industry-standard measure. Results serve directly as quality evidence for external audits and regulatory compliance.
Note: This benchmark does not provide certification or endorsement. It offers an independent, evidence-based assessment framework.
The ten criteria are ordered by audit relevance: evidence-centric criteria first, technical implementation details later.
1. Third-Party Verifiability
"Can an external party independently verify the audit trail?"
2. Tamper Detection
"Can unauthorized modifications be detected?"
3. Sequence Immutability
"Is the Decision → Order → Execution order immutable?"
4. Decision Provenance
"Can inputs, conditions, and rationale be traced?"
5. Responsibility Boundaries
"Who approved, modified, or overrode each action?"
6. Audit Submittability
"Can evidence be exported for regulatory review?"
7. Retention & Durability
"Are records retained for required periods (e.g., 7 years)?"
8. Timestamp Reliability
"Are timestamps synchronized to a trusted source?"
9. Cryptographic Strength
"Do algorithms meet current security standards?"
10. Cryptographic Agility (PQC Readiness)
"Can the system migrate to new algorithms?"
Minimum viable test procedure for all 10 criteria
Export sample audit log (10-100 records). Give it to someone unfamiliar with your system.
Rule: No phone calls, no vendor support, no internal tools allowed.
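The "no internal tools" rule implies the verifier must be a single, dependency-free script. The sketch below assumes a hypothetical export format (JSONL, each record carrying a `prev_hash` equal to the SHA-256 of the previous record); real export formats will differ, but the shape of the check is the same.

```python
# verify_export.py -- standalone verifier sketch (stdlib only, no vendor tools).
# ASSUMPTION: the export is JSONL and each record's "prev_hash" equals the
# SHA-256 of the canonical JSON of the previous record ("0" * 64 for the first).
import hashlib
import json
import sys

def record_hash(record):
    # Canonical form (sorted keys, no whitespace) so hashes are reproducible.
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def verify(path):
    expected = "0" * 64  # genesis sentinel for the first record
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            record = json.loads(line)
            if record.get("prev_hash") != expected:
                print(f"FAIL: chain breaks at record {lineno}")
                return False
            expected = record_hash(record)
    print("PASS: hash chain intact")
    return True

if __name__ == "__main__":
    sys.exit(0 if verify(sys.argv[1]) else 1)
```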
Modify one field in one historical record. Run integrity check.
Pass: Automatic detection with alert; modification location identified.
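Under the same assumed chain layout, the tamper drill is mechanical. A toy in-memory run, showing that the break surfaces at the record after the edit, which localizes the modification:

```python
# Tamper-detection drill on an in-memory chain (same assumed layout as above).
import hashlib, json

def record_hash(record):
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def build_chain(payloads):
    chain, prev = [], "0" * 64
    for payload in payloads:
        record = {"prev_hash": prev, **payload}
        chain.append(record)
        prev = record_hash(record)
    return chain

def first_break(chain):
    expected = "0" * 64
    for i, record in enumerate(chain):
        if record["prev_hash"] != expected:
            return i
        expected = record_hash(record)
    return None

chain = build_chain([{"event": "decision"}, {"event": "order"}, {"event": "execution"}])
chain[1]["event"] = "order (edited)"  # modify one field in one historical record
# The chain breaks at index 2: record 1 was altered (or record 2's link forged).
print(first_break(chain))  # -> 2
```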
Find a Decision → Order → Execution chain. Verify cryptographic binding.
Test: Try to insert a backdated event. If possible, score 0.
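One way (not the only way) to achieve this binding is for each downstream event to commit to the SHA-256 of its parent. The field names below are illustrative assumptions; the point is that a backdated event cannot displace the original, because downstream events already commit to it.

```python
# Decision -> Order -> Execution binding via parent hashes (illustrative fields).
import hashlib, json, time

def digest(event):
    return hashlib.sha256(json.dumps(event, sort_keys=True).encode("utf-8")).hexdigest()

decision  = {"type": "decision",  "ts": time.time(), "signal": "example-signal"}
order     = {"type": "order",     "ts": time.time(), "parent": digest(decision)}
execution = {"type": "execution", "ts": time.time(), "parent": digest(order)}

def bound(parent, child):
    return child["parent"] == digest(parent)

assert bound(decision, order) and bound(order, execution)

# Backdating test: a forged order written after the fact can point at the real
# decision, but the execution commits to the *original* order, so the swap fails.
forged = {"type": "order", "ts": decision["ts"] - 60, "parent": digest(decision)}
print(bound(forged, execution))  # False -> insertion rejected; if True, score 0
```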
Pick a random decision from last week. Reconstruct: inputs, parameters, logic, approver.
Target: Full context retrievable in <10 min = Score 2.
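A reconstruction helper can be as simple as a lookup that also reports which context fields are missing. The file name, `decision_id`, and field names below are hypothetical:

```python
# Provenance reconstruction sketch (hypothetical JSONL export and field names).
import json

REQUIRED = ("inputs", "parameters", "logic_version", "approver")

def reconstruct(path, decision_id):
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if record.get("decision_id") == decision_id:
                missing = [k for k in REQUIRED if k not in record]
                return record, missing
    return None, list(REQUIRED)

record, missing = reconstruct("audit_export.jsonl", "D-2025-0042")
print("full context" if record and not missing else f"gaps: {missing}")
```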
Simulate: "Regulator requests all activity for Account X, Date Y."
Target: One-click export; complete package in <5 minutes = Score 2.
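The export simulation is scriptable end to end. This sketch filters a JSONL log by account and date, then writes a zip with a small integrity manifest; the field names and package layout are assumptions, not the Evidence Pack format:

```python
# Evidence-export sketch: "all activity for Account X, Date Y" in one call.
# ASSUMPTION: field names ("account", "ts") and the package layout.
import hashlib, json, zipfile

def export_evidence(log_path, account, date, out_path):
    hits = []
    with open(log_path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if record.get("account") == account and str(record.get("ts", "")).startswith(date):
                hits.append(line)
    payload = "".join(hits).encode("utf-8")
    manifest = {"account": account, "date": date,
                "record_count": len(hits),
                "sha256": hashlib.sha256(payload).hexdigest()}
    with zipfile.ZipFile(out_path, "w") as z:
        z.writestr("records.jsonl", payload)
        z.writestr("manifest.json", json.dumps(manifest, indent=2))

export_evidence("audit_export.jsonl", "ACC-X", "2025-06-01", "evidence_pack.zip")
```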
Review retention policy, time source, cryptographic algorithms, migration plan.
Covers: Criteria #7-10 (Retention, Timestamp, Crypto Strength, Agility)
Third-party submission template for audit and regulatory review
Alignment with Regulation (EU) 2024/1689 for high-risk AI systems
| EU AI Act Article | Requirement | Benchmark Coverage |
|---|---|---|
| Article 12 | Record-keeping / Logging | ✓ Direct (Criteria 1-7) |
| Article 13 | Transparency | ◐ Partial (Criteria 4, 5) |
| Article 14 | Human Oversight | ◐ Partial (Criterion 5) |
| Article 17 | Quality Management | ✓ Supported (Criteria 6, 7) |
MiFID II / RTS 25 Synergy: Criterion #8 (Timestamp Reliability) also addresses RTS 25 clock synchronization requirements (±100μs for HFT, ±1ms for others).
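Public NTP cannot prove RTS 25 compliance (that requires UTC-traceable PTP or GNSS time), but a quick offset probe is a cheap red-flag check. A sketch using the third-party `ntplib` package:

```python
# Coarse clock-offset probe (sanity check only, NOT an RTS 25 compliance test).
# Requires: pip install ntplib
import ntplib

RTS25_HFT_S   = 100e-6  # max divergence from UTC: +/-100 microseconds (HFT)
RTS25_OTHER_S = 1e-3    # +/-1 millisecond (other algorithmic trading)

response = ntplib.NTPClient().request("pool.ntp.org", version=3)
offset = abs(response.offset)  # local clock vs. NTP server, in seconds

print(f"offset: {offset * 1e6:.0f} us")
print("within 1 ms budget:", offset <= RTS25_OTHER_S)
```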
Industry stakeholders who benefit from standardized auditability measurement
Set a common baseline for audit engagements. Compare systems objectively.
Demonstrate your product's auditability with quantifiable metrics.
Turn transparency into a competitive advantage. Speed up audit submissions.
Interpretation guide for assessment results
Ready for external audit and regulatory review. Continue maintaining best practices.
Address gaps in 0-score areas before external audit. Focus on quick wins first.
Significant improvements needed. Prioritize evidence-centric criteria #1-6.
Fundamental gaps require immediate attention. Consider system redesign.
All benchmark documents and resources
VSO-SCORE-001
10 criteria, scoring rubric, self-assessment sheet
VSO-SCORE-002
Step-by-step test procedures (~3 hours total)
Submission Template
Third-party submission template with attestation
VSO-SCORE-004
Regulatory mapping to Articles 12, 13, 14, 17
Frequently asked questions about the benchmark
Does using this benchmark require adopting VCP?
No. This benchmark is a measurement tool for auditability, usable regardless of technology choice. However, achieving scores close to 20 typically requires cryptographic integrity mechanisms, which VCP provides as one option.
Can audit firms use the benchmark in client engagements?
Yes. The benchmark is licensed under CC BY 4.0. Audit firms can use it for client engagements with attribution. The Evidence Pack provides a standardized submission format.
Does assessment require sharing raw trading data?
Not necessarily. The benchmark is designed for internal self-assessment. For third-party submissions, the Evidence Pack uses SHA-256 hashes to prove file integrity without exposing actual content. You control what gets shared.
How should results be reported to a third party?
Use the Evidence Pack template: overall score, 10-criteria breakdown, Evidence Index (filename + SHA-256 hash), and assessor attestation. The hash-based index proves evidence authenticity without requiring full data disclosure.
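Generating a hash-based index of this kind takes only a few lines; the directory layout below is an assumption:

```python
# Evidence Index sketch: one "filename  sha256" line per file, so reviewers can
# later confirm integrity without the files themselves being disclosed up front.
import hashlib
from pathlib import Path

def evidence_index(evidence_dir):
    lines = []
    for path in sorted(Path(evidence_dir).rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            lines.append(f"{path.name}  {digest}")
    return "\n".join(lines)

print(evidence_index("evidence/"))
```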
How should scores be interpreted?
16-20 points indicates strong auditability and readiness for external audit. 11-15 is moderate; address 0-score items first. Below 10 requires significant improvement before regulatory engagement.
Is this a certification program?
No. This benchmark is for self-assessment and third-party evaluation. For formal certification, see the VC-Certified program, which uses VCP compliance as its basis.
Published by VeritasChain Standards Organization (VSO)
as part of the VCP standards ecosystem.