6 min read · By Dr. Youssef TRARDI

Scoring AI use cases: the TRARDI Impact Score framework

A 5-axis scoring framework to triage 10–30 AI candidates in 5 days — with a strict kill threshold that most teams skip.

TRARDI Impact Score ranks AI use cases on 5 weighted axes (Business Impact 30%, Data Readiness 20%, Feasibility 20%, Adoption Risk 15%, Governance 15%). Scores below 5.0 are killed, no exceptions.

Gartner reports 87% of AI projects never reach production. MIT Sloan found only 11% of companies see meaningful financial impact from AI. The failure almost never traces to the technology. It traces to the use case — picked by loudest sponsor, funded on feeling, exposed as wrong in month three. The TRARDI Impact Score prevents that waste. Every candidate scores on five weighted axes. A 0–10 composite decides the fate. Above 7.5 green-lights Phase 2. Below 5.0 kills or parks, no exceptions. Between, the scope gets reworked until the weakest axis catches up. The math takes a week to learn. The conversation it forces is the actual prioritization work.

Why gut-feel prioritization fails

Every organization we audit starts with 10 to 30 AI ideas on a backlog. Each has an executive sponsor, a business case, a date that already slipped. Without a scoring discipline, prioritization collapses into who argues loudest in the meeting. A 2024 Accenture survey of 2,000 C-suite executives found 34% of stalled AI programs had been prioritized by executive preference, not structured analysis. The TRARDI Impact Score replaces the argument with a grid. Teams still debate — they debate axis ratings, not political capital. That shift turns a 6-hour prioritization meeting into a 90-minute session with a defensible output.

The 5 axes

Each axis has a 10-page internal rubric defining what a 5 or an 8 means in concrete terms. Teams calibrate in 2 hours before scoring begins.

Axis | Weight | What it measures
Business Impact (BI) | 30% | Revenue gain, cost avoided, risk reduced — quantified in euros, per year
Data Readiness (DR) | 20% | Volume, quality, lineage, access rights of training and inference data
Feasibility (F) | 20% | State-of-the-art model maturity, architecture buildable within 3 months, no research unknowns
Adoption Risk (AR) | 15% | Change-management load, user incentive alignment, workflow fit
Governance Complexity (GC) | 15% | Regulatory scope, audit trail, data residency, explainability requirements

The formula and thresholds

Composite score

Score = (BI × 0.30) + (DR × 0.20) + (F × 0.20) + (AR × 0.15) + (GC × 0.15)

Score range | Action | Typical scope
≥ 7.5 | Green-light Phase 2 | Prototype build, 3–6 weeks, fixed-price deliverable
5.0 – 7.4 | Rework | Reduce scope, improve data, re-align stakeholders, re-score in 4 weeks
< 5.0 | Kill or park | No exceptions. Revisit in 2 quarters if material conditions change.
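The composite and its cut-offs are simple enough to sketch in a few lines. The weights and thresholds below are the ones from the article; the candidate ratings in the example are hypothetical, for illustration only:

```python
# Sketch of the TRARDI Impact Score composite and its decision thresholds.
# Weights and cut-offs are taken from the article; the example candidate
# ratings are hypothetical.

WEIGHTS = {"BI": 0.30, "DR": 0.20, "F": 0.20, "AR": 0.15, "GC": 0.15}

def impact_score(ratings: dict[str, float]) -> float:
    """Weighted 0-10 composite over the five axes."""
    return sum(WEIGHTS[axis] * ratings[axis] for axis in WEIGHTS)

def verdict(score: float) -> str:
    """Map a composite score to the action defined by the thresholds."""
    if score >= 7.5:
        return "Green-light Phase 2"
    if score >= 5.0:
        return "Rework and re-score in 4 weeks"
    return "Kill or park"  # no exceptions

# Hypothetical candidate, not one of the anonymized cases below.
candidate = {"BI": 8, "DR": 7, "F": 8, "AR": 7, "GC": 6}
score = round(impact_score(candidate), 2)
print(score, "->", verdict(score))
```

Note that the verdict function encodes the kill threshold as code, which is the point: a candidate below 5.0 gets no conditional branch to argue its way through.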

Three use cases scored live

Real scores from anonymized client engagements. The composite does not decide — the conversation around it does.

Sales call transcription + CRM update

Score: 7.65 (BI 7, DR 9, F 9, AR 8, GC 6)
Verdict: Green-light

Legal contract review AI assistant

Score: 5.85 (BI 9, DR 4, F 7, AR 6, GC 3)
Verdict: Rework DR first

Generative design for manufacturing

Score: 5.45 (BI 8, DR 3, F 4, AR 7, GC 6)
Verdict: Rework or park

When this framework does not fit you

The TRARDI Impact Score is built for organizations with 10+ AI candidates competing for the same budget and team. If you have 1–2 ideas and no competing demands, the framework is overhead. If your use case is already funded, committed, and in motion, the framework is the wrong tool to second-guess the decision. You would use it to retrospectively justify a kill that already happened — which is political theater, not prioritization. The right moment is before Phase 2 funding, when the decision is still reversible.

FAQ

Can we use the TRARDI Impact Score without hiring TRARDI?

Yes. Phase 1 Discovery engagements include the full rubric, calibration training, and axis definitions. The framework is not a trade secret. The discipline of using it is the value we bring.

How long does scoring 20 use cases take?

3 to 5 days with the right stakeholders in one room. Day 1 calibrates the axes — teams agree on what 'Business Impact = 8' means. Days 2–4 score the backlog. Day 5 documents the decisions.

What happens when stakeholders disagree on a rating?

Disagreement is the signal. An axis rating split between 5 and 8 across a team means you have not agreed on what the axis means for this candidate. That conversation, not the composite score, is the real output.
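One way to surface that signal mechanically is to flag any axis whose ratings spread too far across raters before averaging anything. The 2-point spread threshold and the rater roles below are assumptions for illustration, not part of the rubric:

```python
# Sketch: flag axes where rater disagreement exceeds a spread threshold,
# signalling that the team has not agreed on what the axis means for this
# candidate. The 2-point threshold and rater roles are assumptions.

def contested_axes(ratings_by_rater: dict[str, dict[str, int]],
                   max_spread: int = 2) -> list[str]:
    axes = next(iter(ratings_by_rater.values())).keys()
    contested = []
    for axis in axes:
        values = [r[axis] for r in ratings_by_rater.values()]
        if max(values) - min(values) > max_spread:
            contested.append(axis)
    return contested

ratings = {  # hypothetical workshop ratings for one candidate
    "sponsor":   {"BI": 8, "DR": 6, "F": 7, "AR": 5, "GC": 6},
    "data_lead": {"BI": 5, "DR": 6, "F": 7, "AR": 6, "GC": 6},
    "legal":     {"BI": 7, "DR": 5, "F": 6, "AR": 6, "GC": 7},
}
print(contested_axes(ratings))  # BI spread is 8 - 5 = 3, so BI is contested
```

A contested axis does not get averaged away; it goes back on the whiteboard until the definitions converge.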

How often do scores get refreshed?

Quarterly, or when material facts change — a new data source arrives, a regulator publishes guidance, a sponsor rotates out. Scores drift. A use case that scored 8 in January can be 5 in April if the sponsor leaves.

Discipline, not magic

The TRARDI Impact Score is not magic. It is not secret. The win is the discipline of scoring the backlog, arguing about the axes, and honoring the kill threshold when a candidate falls below 5.0. Most teams skip the third part. That is why most AI portfolios look like backlogs.

Sources

  1. Gartner (2024). Market Guide for AI Trust, Risk and Security Management.
  2. MIT Sloan Management Review (2023). The AI Deployment Gap.
  3. Accenture (2024). Scaling AI: The State of Generative AI in Enterprise.
  4. McKinsey Global Institute (2024). The state of AI in 2024.

Want this applied to your portfolio?

Book a 30-minute diagnostic

We'll walk through 2–3 of your candidates with the rubric, live. No pitch.
