EU AI ACT · ANNEX III 5(b)
Point-in-time Evidence Packs for high-risk credit AI
When an auditor asks what your model was doing six months ago, Verifacta gives you a provable answer, not a best guess reconstructed from Slack threads and spreadsheets.
Early stage: currently onboarding design partners for pilot implementations.
Why this is useful before your first audit
Built to match how audits are run: scoped to a date range, tied to source records, and reproducible without relying on screenshots or memory.
Reality in most teams
Evidence is scattered across model registries, dataset stores, CI/CD, approvals, and documents owned by different people, with inconsistent linkages.
What breaks in audits
You cannot reliably answer: “Which model was deployed on date X, with which data, approvals, and environment configuration?” without manual reconstruction.
Verifacta's stance
Don't centralize governance “documents”. Centralize the facts (events), then compile the truth at query time.
How it works
Verifacta is not a governance platform. It's a compliance system of record for operational evidence: ingest immutable events, reconstruct past truth, export evidence.
Ingest facts (append-only)
Material governance actions are recorded as events: model version registered, dataset snapshot linked, approval recorded, deployment activated/deactivated.
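The append-only model can be pictured as a minimal sketch. The event shape below (field names, dataclass layout) is hypothetical, not Verifacta's actual schema; the point is that events carry references to artifacts and are never updated or deleted:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GovernanceEvent:
    """One immutable fact in an append-only log (illustrative shape)."""
    event_type: str   # e.g. "model_version_registered", "approval_recorded"
    system_id: str    # the high-risk AI system this fact belongs to
    occurred_at: str  # ISO-8601 UTC timestamp
    payload: dict     # references to artifacts, never the artifacts themselves

log: list[GovernanceEvent] = []

def record(event: GovernanceEvent) -> None:
    """Append only: the log is never rewritten, so history stays provable."""
    log.append(event)

record(GovernanceEvent(
    event_type="model_version_registered",
    system_id="credit-scoring-v2",
    occurred_at="2025-01-15T09:30:00+00:00",
    payload={"model_version": "2.3.1", "registry_ref": "registry://credit/2.3.1"},
))
```

Because events are immutable, the log itself is the audit trail: correcting a mistake means appending a new event, not editing an old one.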
Compile a snapshot ("as of")
At a selected timestamp, Verifacta deterministically reconstructs the active system state per environment and builds a traceability graph.
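"As of" reconstruction is just a deterministic replay of the event log up to the chosen timestamp. A minimal sketch, with invented event tuples standing in for the real log:

```python
# Illustrative event log: (timestamp, event_type, environment, model_version).
events = [
    ("2025-01-10T08:00:00+00:00", "deployment_activated",   "prod", "2.2.0"),
    ("2025-02-01T12:00:00+00:00", "deployment_deactivated", "prod", "2.2.0"),
    ("2025-02-01T12:05:00+00:00", "deployment_activated",   "prod", "2.3.1"),
]

def snapshot(as_of: str) -> dict:
    """Replay events in order up to `as_of` to derive active state per environment.

    Same log + same timestamp = same result, every time.
    """
    state: dict = {}
    for ts, etype, env, version in sorted(events):
        if ts > as_of:  # ISO-8601 UTC strings compare chronologically
            break
        if etype == "deployment_activated":
            state[env] = version
        elif etype == "deployment_deactivated" and state.get(env) == version:
            state[env] = None
    return state

# "Which model was live in prod on 15 January?"
assert snapshot("2025-01-15T00:00:00+00:00") == {"prod": "2.2.0"}
```

The replay is a pure function of the log, which is what makes the answer reproducible rather than reconstructed from memory.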
Export Evidence Pack
Generate a regulator-readable pack (PDF + JSON): state, linkages, gaps, and an integrity hash so the same query reproduces the same result.
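One way an integrity hash like this can work (a sketch, not Verifacta's actual implementation): serialize the pack canonically, then hash the bytes, so re-running the identical query yields the identical digest:

```python
import hashlib
import json

def integrity_hash(pack: dict) -> str:
    """SHA-256 over a canonical JSON serialization.

    sort_keys + fixed separators make the byte stream deterministic,
    so the same pack content always produces the same digest.
    """
    canonical = json.dumps(pack, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

pack = {
    "system_id": "credit-scoring-v2",
    "as_of": "2025-01-15T00:00:00+00:00",
    "deployments": {"prod": "2.2.0"},
}
# Key order in the source dict does not matter; only content does.
assert integrity_hash(pack) == integrity_hash(dict(reversed(list(pack.items()))))
```

An auditor who re-runs the query months later can compare digests instead of re-reading documents line by line.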
What you get
An Evidence Pack (PDF + JSON) with integrity hash, event lineage, and per-control coverage marked as present, partial, or missing.
Evidence Pack: Snapshot
Included sections
System ID • Scope • Deployments • Approvals • Dataset lineage • Event references • Integrity
Gaps report
Gaps are visible in every export: present / partial / missing evidence per obligation category. Critical gaps require manual sign-off.
What Verifacta does not do
- No monitoring / runtime integration
- No “governance dashboards”
- No legal conclusions or compliance claims
- No blocking exports in v1
One-time integration model
Machine-to-machine ingestion of governance events. Your systems remain the source of artifacts; Verifacta becomes the source of provable history.
Point-in-time by design
Every Evidence Pack is scoped to a specific timestamp and system. The same query always produces the same result.
Frequently asked questions
Common questions from compliance, risk, and AI governance teams.
Is Verifacta already deployed at customer sites?
Not yet. We are currently onboarding design partners for pilot implementations focused on one system and one environment.
Does Verifacta provide legal or compliance verdicts?
No. Verifacta provides evidence reconstruction and packaging. Compliance interpretation remains with your risk, compliance, and legal functions.
What events do we need to send?
The core set covers model version registrations, dataset snapshot links, approval records, and deployment activations or deactivations. During the pilot we scope this together. Most teams already produce these events; they just aren't captured in one place.
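As an illustration only, the four core event types might look like the payloads below. Field names and values are hypothetical, not Verifacta's actual schema; your pipelines likely emit equivalents of these already:

```python
# Hypothetical examples of the four core governance event types.
core_events = [
    {"type": "model_version_registered", "system": "credit-scoring", "version": "2.3.1"},
    {"type": "dataset_snapshot_linked",  "system": "credit-scoring", "dataset_ref": "s3://datasets/credit/2025-01"},
    {"type": "approval_recorded",        "system": "credit-scoring", "approver": "risk-committee"},
    {"type": "deployment_activated",     "system": "credit-scoring", "environment": "prod"},
]

# Every event names its type and the system it belongs to; everything
# else is a reference back to an artifact your own tooling owns.
assert all("type" in e and "system" in e for e in core_events)
```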
How is this different from a model registry?
A model registry tracks model artefacts. Verifacta tracks the governance history around them: approvals, deployments, dataset links, and the point-in-time state you'd need to show an auditor. It is designed to sit alongside your registry, not replace it.
Does this replace our existing documentation?
No. Your existing documents, policies, and model cards stay where they are. Verifacta records the governance events that point to those artefacts, so you can reconstruct a coherent, time-stamped view of what existed and what was in force at any given moment.
Looking for design partners
If you are preparing high-risk credit AI controls, we can run a scoped pilot and produce one Evidence Pack from your operational events.