TRARDI Framework

A public method for AI Systems Assurance.

Four methodologies that govern how we arbitrate AI use cases, audit live systems, verify sovereignty, and assure systems after deployment.

Version 0 — evolving

All four parts are published at version 0. Each part evolves as we engage new systems and retire weak heuristics.

What this framework is

The TRARDI Framework is the public methodological backbone of TRARDI Group's assurance practice. We publish it because trust in AI systems depends on demonstrable evidence. The framework is organized in four parts, each answering a distinct question about an AI system: should it be built; is it built correctly; can its sovereignty be verified; does it hold in real conditions after deployment?

Four parts, four questions

Published
Part 01

Arbitration

Question it answers

Should this AI system be built?

A 5-axis weighted scoring framework to arbitrate 10 to 30 AI candidates in 5 days, with a strict kill threshold. Applied at the start of any engagement to avoid funding the wrong use case.
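To make the mechanism concrete, here is a minimal sketch of a 5-axis weighted score with a kill threshold. The axis names, weights, and threshold value below are illustrative assumptions, not published TRARDI values.

```python
# Hypothetical arbitration scoring sketch. Axis names, weights, and the
# kill threshold are assumptions for illustration only.

AXES = {  # weight per axis, summing to 1.0
    "value": 0.30,
    "feasibility": 0.25,
    "data_readiness": 0.20,
    "risk": 0.15,
    "sovereignty": 0.10,
}
KILL_THRESHOLD = 2.5  # assumed: candidates scoring below this are dropped

def composite(scores: dict[str, int]) -> float:
    """Weighted composite of per-axis ratings (each 1..5)."""
    return sum(AXES[axis] * scores[axis] for axis in AXES)

def arbitrate(candidates: dict[str, dict[str, int]]) -> list[tuple[str, float]]:
    """Score all candidates, drop those below the kill threshold,
    and rank the survivors best-first."""
    scored = [(name, composite(s)) for name, s in candidates.items()]
    survivors = [(n, c) for n, c in scored if c >= KILL_THRESHOLD]
    return sorted(survivors, key=lambda pair: pair[1], reverse=True)
```

The kill threshold is what makes the exercise an arbitration rather than a ranking: a candidate below the line is removed outright, not merely placed last.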

Published
Part 02

AI Systems Audit

Question it answers

Is this system built correctly?

Six-dimension audit of an AI system in operation: context, architecture, model, controls, supervision, and evidence-governance. Produces a findings report with severity-ranked gaps and named risk owners.

Published
Part 03

Sovereign Exposure Review

Question it answers

Can the sovereignty of this system be verified?

Six-dimension assessment of effective sovereignty: hosting, dependency graph, extraterritorial exposure, access control, reversibility, traceability. Measures what holds in practice, not what is declared.

Published
Part 04

Post-Deployment Assurance

Question it answers

Does this system hold in real conditions?

Six-dimension review of a deployed system: usage reality, stability, exceptions and overrides, output quality, control integrity, governance continuity. One-off or quarterly retainer.

Shared conventions across all parts

Scoring scale

Every axis is rated 1 to 5, with 5 as the best outcome. Composite scores roll up on the same 0-to-5 scale and are converted to 0 to 100 percent for external reporting.
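The roll-up and conversion can be sketched in a few lines; the unweighted mean below is an assumption (individual parts may weight axes differently).

```python
# Shared-scale sketch: per-axis ratings 1..5 roll up to a composite,
# reported externally as a percentage. The unweighted mean is an
# illustrative assumption.

def mean_composite(ratings: list[int]) -> float:
    """Unweighted composite of per-axis ratings on the 0..5 scale."""
    return sum(ratings) / len(ratings)

def to_percent(composite: float) -> float:
    """Convert a 0..5 composite to 0..100 for external reporting."""
    return round(composite / 5 * 100, 1)
```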

Severity levels

Findings are categorized as informational, low, medium, high, or critical. High and critical findings block production sign-off until remediated or explicitly accepted by a named risk owner.

Evidence types

Controls are supported by four evidence types: artefact (a document that exists), measurement (a metric that is captured), review (a human sign-off recorded), and test (an executed verification with a pass or fail result).
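The four evidence types translate naturally into a typed record; the field names and the `coverage` helper below are illustrative, not part of the published framework.

```python
# Evidence-typing sketch. The four types are from the framework text;
# record fields and the coverage helper are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum

class EvidenceType(Enum):
    ARTEFACT = "artefact"        # a document that exists
    MEASUREMENT = "measurement"  # a metric that is captured
    REVIEW = "review"            # a human sign-off recorded
    TEST = "test"                # an executed verification, pass or fail

@dataclass
class Evidence:
    control_id: str
    kind: EvidenceType
    reference: str  # identifier of the underlying artefact or record

def coverage(evidence: list[Evidence]) -> set[EvidenceType]:
    """Which of the four evidence types back a given control."""
    return {e.kind for e in evidence}
```

Typing evidence this way lets an audit report state not just that a control is evidenced, but by which kinds of evidence.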

Named ownership

Every finding and every remediation carries a named owner. An unnamed risk is an unmanaged risk. The framework refuses anonymous responsibility.

Why we publish it

Trust in assurance work depends on the framework being readable by the audited organisation, not only by the auditor. Publishing it makes every claim we raise against a system traceable to a public rubric. Clients keep the framework after the engagement ends. Competitors can adopt it. Regulators can read it. That is the point.

Scope of this framework

The TRARDI Framework is a private methodology made publicly readable. It is not a certification scheme, not an accreditation standard under ISO/IEC 17021, and not a substitute for official conformity assessment by bodies accredited under COFRAC or equivalent European authorities. It is the methodology TRARDI Group uses to deliver its engagements and to structure its opinions.

Want to apply the framework?

Book a 30-minute diagnostic

We will walk you through how the framework would apply to a specific AI system in your portfolio. No pitch.

Book a diagnostic