Against AI Theatre
Why declared responsibility, paper governance, and marketing sovereignty collapse the moment a real system meets a real audit.
AI theatre: governance on paper, declarative compliance without evidence, sovereignty marketing without measurable control. The antidote is provability discipline applied to every claim.
Enterprise AI has moved through three questions. First: can it work? Second: can we ship it? The third, beginning now: can we defend what we built? Most organisations are unprepared for the third. They have accumulated systems, frameworks, declarations, and governance decks. They have not accumulated evidence. When an incident lands, a regulator knocks, or a board member asks a precise question, the gap between posture and operational reality opens. I call this gap AI theatre. This essay describes its four shapes, the four pressures that feed it, and what it takes to operate outside of it.
What AI theatre looks like
AI theatre takes four recognisable shapes. Each is built for presentation. Each collapses under serious scrutiny. The appearance of compliance replaces its operational reality.
The four shapes
Paper governance
A committee meets, minutes circulate, a policy gets signed. No connection to the systems running in production. No connection to what gets scored, retrained, or escalated. The governance lives on paper. It never enters an engineering workflow.
Declarative compliance
A company declares itself compliant with a framework it has never had audited. The claim is on a slide, sometimes on a website. It does not survive the first structured review by a competent third party.
Marketing sovereignty
The AI stack is declared sovereign because it runs in a European data centre. No one has audited the supply chain, the dependency graph, the egress paths, or the conditions under which substitutability holds. The claim is true at a shallow depth and false at any depth that matters. A sketch of what even a first-pass audit involves follows the four shapes.
Responsible-AI rituals
A set of principles gets posted: fairness, transparency, human oversight. No measurement regime sits under any of them. Nothing changes in the day-to-day engineering when the principle is challenged. The ritual functions as moral cover. Nothing underneath it behaves like a control.
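To make the sovereignty point concrete, here is a minimal sketch of the first-pass dependency review that marketing sovereignty never performs. It assumes a CycloneDX-style SBOM export in JSON and two hypothetical allowlists, approved suppliers and approved egress hosts; the names are illustrative, and a real review goes much further, into transitive dependencies and the contractual conditions of substitutability.

```python
import json

# Hypothetical allowlists; in practice these come from procurement
# records and network policy, not from a hardcoded set.
APPROVED_SUPPLIERS = {"eu-vendor-a", "eu-vendor-b"}
APPROVED_EGRESS_HOSTS = {"api.internal.example.eu"}

def audit_stack(sbom_path: str, declared_egress: list[str]) -> list[str]:
    """Return the findings a sovereignty claim has to answer for.

    Only two shallow properties are checked: whether each SBOM
    component names an approved supplier, and whether every declared
    egress host is on the allowlist.
    """
    findings = []
    with open(sbom_path) as f:
        sbom = json.load(f)

    # CycloneDX-style layout: a top-level "components" array, each
    # entry optionally carrying a "supplier" object with a "name".
    for component in sbom.get("components", []):
        supplier = component.get("supplier", {}).get("name", "").lower()
        if supplier not in APPROVED_SUPPLIERS:
            findings.append(
                f"component {component.get('name', '?')}: supplier "
                f"{supplier or 'undeclared'} is not on the allowlist"
            )

    for host in declared_egress:
        if host not in APPROVED_EGRESS_HOSTS:
            findings.append(f"egress path to {host} is not on the allowlist")

    return findings
```

Even this shallow pass turns "sovereign" from an adjective into a list of findings someone has to answer for.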
Why it flourishes
Four structural pressures explain why AI theatre persists. None is moral failure.

First: the regulatory environment moves faster than operational capacity. AI Act obligations for general-purpose models apply from August 2025 and for high-risk systems from August 2026, with some high-risk categories extended to August 2027. ISO/IEC 42001 was published in December 2023 and adoption is still early. The NIS2 transposition deadline passed in October 2024. DORA has applied since January 2025. Each framework creates a new reporting surface. Organisations do what they always do under time pressure: produce artefacts that look compliant first, and worry about operational reality second.

Second: boards ask simple questions that do not map to simple answers. "Are we responsible in AI?" is not an answerable question. It invites a performative response, which the organisation then produces, documents, and presents.

Third: vendors sell frameworks, and frameworks are easier to buy than to implement. A framework installed without the operational discipline to use it becomes another piece of the theatre.

Fourth: the absence of an independent European assurance market means no signal separates real posture from performed posture. No one publishes a red mark next to the company that declared compliance it could not defend. The performance has no cost.
The alternative
The alternative is discipline applied to every claim. Every claim traceable to an artefact. Every posture testable. Every principle mapped to an operational control. Each truth below is uncomfortable. The theatrical version is comfortable. That gap is the signal.

Provability demands artefacts. Judging the system means looking beyond the model: the pipeline, the dependencies, the supervision, the incident handling, the data lineage. Verifying sovereignty means tracing a supply chain; marketing claims do not count as verification. Generating evidence for a control changes how engineering operates; governance communication alone produces no evidence.

None of this is new in regulated industries. Financial institutions have operated under this discipline for decades. Pharmaceutical manufacturers have. Nuclear operators have. The question now is whether AI systems will be held to the same standard, and which actors will be ready to deliver that standard at operational depth.
Four operating truths
01. AI becomes credible when it is provable, not when it is declared responsible.
02. An AI system is judged as a system, not as a model.
03. Sovereignty is verified, not declared.
04. A control without evidence does not exist in assurance.
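The fourth truth has a direct operational form. A minimal sketch, assuming a hypothetical claims register in which every claim must name both the artefact on disk and the test that produced it; the statement, path, and test identifier are all illustrative.

```python
from dataclasses import dataclass
from pathlib import Path

@dataclass
class Claim:
    """One assertable statement, e.g. 'PII is redacted before training'."""
    statement: str
    artefact: Path  # the evidence file the claim points to
    test_id: str    # the check that produced that evidence

def defensible(claim: Claim) -> bool:
    """A claim counts only if its evidence actually exists on disk."""
    return claim.artefact.exists() and bool(claim.test_id)

# Illustrative register entry; the path and test name are hypothetical.
register = [
    Claim(
        statement="PII is redacted before training",
        artefact=Path("evidence/redaction_report_2025q1.json"),
        test_id="test_redaction_pipeline",
    ),
]

undefended = [c.statement for c in register if not defensible(c)]
if undefended:
    # A control without evidence does not exist: fail the build,
    # do not reword the claim.
    raise SystemExit(f"claims without evidence: {undefended}")
```

The register refuses a declaration that points at nothing. That refusal, automated and running on every change, is what separates a principle from a poster.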
What we refuse
TRARDI Group is building in this space. The refusals matter as much as the ambitions. Each refusal is a commitment with an operational consequence.
The refusals
To become a generalist AI consultancy
The category already exists. It absorbs every new actor into delivery theatre. We are building a different house: three distinct disciplines — assurance, build, adoption — each with its own doctrine, each held to the same operational standard.
To present ourselves as an official certifier
Certification is issued by bodies accredited under COFRAC and equivalent European schemes. We do not hold that accreditation today, and we will not imply that we do. What we offer is readiness, gap analysis, and methodology alignment, carried out by practitioners, with the explicit scope of our work stated on every engagement.
To reduce compliance to documentary work
A compliance posture that only produces documents does not survive an incident. We operate from the other direction: controls first, evidence generation built into the pipeline, documentation as an output, not as the point. A sketch of what that direction looks like in code follows the refusals.
To ship systems we cannot explain
If a design cannot be described to the people who will depend on it, we do not consider it deployable. Explainability is a precondition of shipping, not a layer added afterwards.
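The "controls first" refusal above also has a simple code shape. A minimal sketch, assuming a hypothetical control identifier and an evidence/ directory in the repository: the pipeline runs the check, and the artefact, carrying the outcome, a UTC timestamp, and a hash of the exact input, is written by the run itself.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def run_control(control_id: str, check, payload: dict) -> dict:
    """Execute one control and write its evidence record as a side effect.

    The record carries the outcome, a UTC timestamp, and a hash of the
    exact input, so an auditor can tie the artefact to a specific run.
    """
    record = {
        "control_id": control_id,
        "passed": bool(check(payload)),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input_sha256": hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest(),
    }
    # Documentation as an output: the pipeline writes the artefact;
    # nobody assembles it after the fact for the auditor.
    Path("evidence").mkdir(exist_ok=True)
    (Path("evidence") / f"{control_id}.json").write_text(
        json.dumps(record, indent=2)
    )
    return record

# Illustrative control: every model response must carry a confidence
# score so downstream escalation rules have something to act on.
run_control(
    "CTRL-OUT-007",  # hypothetical control identifier
    lambda output: "confidence" in output,
    {"answer": "approved", "confidence": 0.93},
)
```

The document the auditor reads is a by-product of the pipeline, not a narrative written about it afterwards.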
The contract
This essay is not a pitch. It is a contract. What we commit to is the opposite of theatre: engagements in which every claim can be traced to an artefact, every posture to a test, and every principle to an operational control. We commit to this in our build work, in our adoption programmes, and, as it is formalised, in the independent assurance practice we are establishing. If this framing describes a standard your organisation wants to operate by, the conversation we can have is direct and short. If it describes a standard you want to impose on your AI suppliers, we can help you design the review regime that enforces it. What we cannot do is help continue the theatre. That is what this essay is for.
Signed
Dr. Youssef TRARDI — Founder, TRARDI Group
Book a 30-minute diagnostic
We will spend 30 minutes mapping where your AI posture is provable and where it is not. No pitch.