TRARDI Framework · Part 02

AI Systems Audit

Is this system built correctly?

AI Systems Audit: six-dimension review of a live AI system covering context, architecture, model, controls, supervision, and governance. Severity-ranked findings with named owners. Version 0, 2026.

What an AI Systems Audit covers

An AI system is more than a model. An AI Systems Audit assesses the system as a system: the architecture around the model, the data it depends on, the controls that constrain it, the supervision that catches its errors, the logging that makes it reviewable, and the governance that owns it. The output is a findings report with severity-ranked gaps and named risk owners for every high or critical finding.

The six audit dimensions

Each dimension is rated 1 to 5 on the framework scoring scale. Dimension ratings roll up to a composite score of 0 to 5 for the system.

01

Context & classification

What the system is for, which risk class applies under the AI Act, which regulatory obligations are triggered.

Key checks

  • Intended purpose documented and signed
  • AI Act risk class determined (minimal, limited, high-risk, or prohibited)
  • Regulatory obligations mapped to operational controls
  • Affected parties identified (users, subjects, third parties)

02

Architecture & data

System design, deployment topology, data flows, dependency graph, integration boundaries.

Key checks

  • Architecture diagram current and matches production
  • Data lineage traceable from source to output
  • Third-party dependencies listed with update cadence
  • Integration points documented with failure modes

03

Model & pipeline

Training data, validation regime, deployment pipeline, versioning, rollback procedure.

Key checks

  • Training data sources identified and license-cleared
  • Validation protocol reproducible
  • Model versions tagged with deployment dates
  • Rollback path tested within the last quarter

04

Controls

Access control, input validation, output filtering, rate limiting, safety mechanisms, policy enforcement.

Key checks

  • Access control enforced at API and infrastructure level
  • Input validation rules documented and tested
  • Output filtering prevents policy violations
  • Rate limiting configured to protect against denial-of-service

05

Supervision & human oversight

Human-in-the-loop checkpoints, escalation procedures, override paths, oversight cadence.

Key checks

  • Human checkpoints defined for high-risk decisions
  • Escalation rules trigger automatically on defined conditions
  • Override actions logged with operator identity and reason
  • Supervision team staffing sufficient for decision volume
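The check that override actions are logged with operator identity and reason can be pictured with a minimal sketch. The record layout below is an illustrative assumption, not a schema defined by the framework; `override_record` and its field names are hypothetical.

```python
import json
from datetime import datetime, timezone

# Hypothetical sketch of an override log entry capturing operator identity
# and reason. Field names are illustrative assumptions, not a schema
# defined by the framework.

def override_record(operator_id: str, decision_id: str, reason: str) -> str:
    """Serialize one override action as an append-only JSON line."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "operator_id": operator_id,
        "decision_id": decision_id,
        "reason": reason,
        "action": "override",
    }
    return json.dumps(entry, sort_keys=True)
```

Writing each entry as a self-contained JSON line makes the trail append-only and easy to review after the fact, which is the point of the check.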

06

Evidence & governance

Audit trail, documentation maturity, incident response, named ownership, review cadence.

Key checks

  • Audit trail complete, immutable, retained per policy
  • Operating documentation current within the last quarter
  • Incident response procedure defined, tested, and assigned a named owner
  • System owner named with decision authority documented

Auto-block rules

Four conditions trigger an automatic block on production sign-off, regardless of the composite score.

No named system owner

Block

An unowned system cannot be governed.

No audit trail

Block

A system without logs cannot be audited after the fact.

No incident response procedure

Block

An incident without a procedure becomes an improvisation.

Output policy filters untested

Block

A system that can violate policy in production cannot ship.
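The scoring and auto-block logic described above can be sketched in a few lines. Everything named here is a hypothetical illustration under stated assumptions: the dimension keys, the condition identifiers, and the roll-up as a plain mean are not defined by the framework.

```python
from statistics import mean

# Illustrative sketch of the composite score and auto-block rules.
# Condition names and dimension keys are hypothetical assumptions.

AUTO_BLOCK_CONDITIONS = [
    "no_named_system_owner",
    "no_audit_trail",
    "no_incident_response_procedure",
    "output_policy_filters_untested",
]

def composite_score(dimension_scores: dict[str, int]) -> float:
    """Roll six 1-to-5 dimension ratings up to a 0-to-5 composite."""
    if len(dimension_scores) != 6:
        raise ValueError("expected exactly six dimension scores")
    if not all(1 <= s <= 5 for s in dimension_scores.values()):
        raise ValueError("each dimension is rated 1 to 5")
    return round(mean(dimension_scores.values()), 1)

def sign_off(dimension_scores: dict[str, int], findings: set[str]) -> tuple[bool, str]:
    """Apply the auto-block rules before considering the composite score."""
    blocked = [c for c in AUTO_BLOCK_CONDITIONS if c in findings]
    if blocked:
        # Any single blocking condition stops sign-off, whatever the composite.
        return False, "blocked: " + ", ".join(blocked)
    return True, f"composite {composite_score(dimension_scores)} / 5"
```

The key property is the order of evaluation: the block conditions are checked first, so a system that scores well on every dimension is still blocked if, say, its audit trail is missing.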

What you receive

An AI Systems Audit produces six concrete deliverables, handed over at the end of the engagement.

  • 01

    Audit scope memo with signed boundaries

  • 02

    Evidence pack organized per dimension

  • 03

    Findings report with severity-ranked gaps

  • 04

    Remediation roadmap with named owners and deadlines

  • 05

    Executive debrief (60 minutes, recorded)

  • 06

    Re-test plan for high and critical findings

Typical duration

Two to four weeks, depending on system scope. Week 1: scope and evidence collection. Week 2: dimension review and scoring. Weeks 3 to 4: findings report, debrief, and remediation roadmap.

Scope of this method

This method produces a private audit report for the client organization. It is not a certification under ISO/IEC 17021, not an AI Act conformity assessment, and not a substitute for official regulatory audit by accredited bodies. It is the discipline TRARDI applies to structure its assurance opinions.

Want an audit of one of your AI systems?

Book a 30-minute diagnostic

We will scope the audit to a specific system in your portfolio and walk you through which dimensions would apply first. No pitch.

Book a diagnostic