The MorAIlity Operating Model
Two frameworks. Six phases. Zero guesswork. This is how MorAIlity scores every AI use case, ships systems that hold, and measures adoption beyond vanity metrics.
This page exposes MorAIlity's operating model for prioritization, build, and adoption. The dedicated assurance methodologies — system audit, sovereign exposure review, post-deployment assurance — are published at version 0 on the Framework page, as a distinct TRARDI Assurance discipline.
We score every AI use case across 5 axes, weighted by what determines success. No gut feeling, no vendor pitch. A transparent composite score that tells you where to start, what to defer, and what to kill.
Business Impact — The ROI, risk reduction, or strategic value the use case unlocks. The axis that pays for the project.
Data Readiness — Whether the data needed actually exists, is clean, labeled, and accessible without months of prep.
Feasibility — Whether the solution is a proven pattern, custom engineering, or a research problem in disguise.
Adoption Readiness — Whether users want the tool, tolerate it, or will actively block it. The #2 reason AI projects fail.
Control & Governance Exposure — Regulatory surface, required controls, audit trail, documentation maturity, and supervisory load. Applies to every system: higher in regulated sectors, still measurable everywhere.
Each axis is scored 1–5, where 5 is always the best outcome. The weighted average yields a 1–5 composite, converted to a 0–100 percentage; a code sketch follows the rubric below.
| Axis | 1 / 5 | 2 / 5 | 3 / 5 | 4 / 5 | 5 / 5 |
|---|---|---|---|---|---|
| Business Impact | No business case | <€50k/year — nice-to-have | €50k–100k/year — efficiency | €100k–500k/year — significant | >€500k/year OR strategic differentiator |
| Data Readiness | No data or access blocked | Partial, significant cleaning needed | Fragmented, integration required | Exists, light ETL needed | Clean, labeled, accessible (>10k samples) |
| Feasibility | Research problem (2+ years out) | Frontier approach, significant R&D | Custom architecture but within SOTA | Known patterns (RAG / fine-tuning) | Off-the-shelf LLM solves it |
| Adoption Readiness | HR / union / contractual blockers | Active resistance, needs a champion | Skeptical, needs value demo | Neutral, workflow adjustable | Users actively asking for it |
| Control & Governance Exposure | High-risk per EU AI Act · heavy control load | Regulated sector, DPIA mandatory · significant controls | Standard B2C with GDPR · baseline controls needed | Standard B2B, no sensitive PII · light controls | Low exposure · internal tool, minimal supervisory load |
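
A minimal sketch of the scoring arithmetic in Python. The rubric and the 1–5 scale are as published; the equal weights and the composite / 5 × 100 percentage conversion are assumptions for illustration only (the model weights axes "by what determines success", and those weights are set per engagement):

```python
# Illustrative only: the real model weights axes by what determines
# success; equal weights are an assumption for this sketch.
WEIGHTS = {
    "business_impact": 0.20,
    "data_readiness": 0.20,
    "feasibility": 0.20,
    "adoption_readiness": 0.20,
    "control_governance_exposure": 0.20,
}

def composite_score(scores: dict[str, int]) -> float:
    """Weighted composite of the five 1-5 axis scores."""
    for axis, value in scores.items():
        if not 1 <= value <= 5:
            raise ValueError(f"{axis} must be scored 1-5, got {value}")
    return sum(WEIGHTS[axis] * scores[axis] for axis in WEIGHTS)

def as_percentage(composite: float) -> float:
    """0-100 conversion; composite / 5 * 100 is an assumed mapping."""
    return composite / 5 * 100
```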
| Decision | Composite score |
|---|---|
| Go — Priority 1 | ≥ 4.0 |
| Go — Priority 2 | 3.0 – 3.9 |
| Defer | 2.0 – 2.9 |
| Kill | < 2.0 |
A 1/5 on a critical axis overrides the composite:

| Axis scored 1/5 | What it means |
|---|---|
| Feasibility | Can't be built. Business case doesn't matter. |
| Business Impact | No ROI. Likely a vanity project. |
| Data Readiness | Unblock data access before re-evaluating. |
| Adoption Readiness | Resolve HR / union blockers first. |
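
A sketch of the decision mapping, reusing `composite_score` from the sketch above. The band thresholds are the published ones; treating a 1/5 on these axes as a hard override that trumps the composite is our reading of the notes above:

```python
def decide(scores: dict[str, int]) -> str:
    """Map axis scores to a portfolio decision (sketch).

    Band thresholds are as published; the 1/5 overrides assume a floor
    score on a critical axis settles the decision before the composite.
    """
    if scores["feasibility"] == 1 or scores["business_impact"] == 1:
        return "Kill"  # can't be built, or no ROI
    if scores["data_readiness"] == 1 or scores["adoption_readiness"] == 1:
        return "Defer"  # unblock data access or HR/union blockers first

    composite = composite_score(scores)
    if composite >= 4.0:
        return "Go — Priority 1"
    if composite >= 3.0:
        return "Go — Priority 2"
    if composite >= 2.0:
        return "Defer"
    return "Kill"
```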
No phase without a clear deliverable. The Impact Score shapes the early decisions; the Velocity Index tracks real adoption through deployment and informs governance at scale.
Phase 01 — Discovery
Phase 02 — Arbitration
Phase 03 — Architecture & Prototype
Phase 04 — Production Deployment
Phase 05 — Adoption Program
Phase 06 — Governance & Scale
Most AI projects fail not on tech, but on adoption. Vanity metrics like “12,000 logins” hide the truth. Our Velocity Index tracks the three metrics that matter: active users, task completion, time-to-autonomy. Weekly, with automated alerts.
| Metric | Definition | Healthy | Red flag |
|---|---|---|---|
| Active Users (AU) | % of the target population using the tool at least once in the past 7 days | ≥ 80% by Week 8 | Drop > 10 pts week-over-week |
| Task Completion Rate (TCR) | Of tasks started in the tool, the % actually completed (not abandoned) | ≥ 75% sustained | < 60% for 2 consecutive weeks |
| Time-to-Autonomy (TTA) | Median days between training and first solo task without a coach | < 14 days | > 30 days median |
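
The red flags translate directly into weekly checks. A minimal sketch: the thresholds are the published ones, while the function names and the weekly percentage series as inputs are ours:

```python
def au_red_flag(weekly_au_pct: list[float]) -> bool:
    """Active Users: flag a week-over-week drop of more than 10 points."""
    return len(weekly_au_pct) >= 2 and (weekly_au_pct[-2] - weekly_au_pct[-1]) > 10

def tcr_red_flag(weekly_tcr_pct: list[float]) -> bool:
    """Task Completion Rate: flag below 60% for 2 consecutive weeks."""
    return len(weekly_tcr_pct) >= 2 and all(pct < 60 for pct in weekly_tcr_pct[-2:])

def tta_red_flag(median_tta_days: float) -> bool:
    """Time-to-Autonomy: flag a median above 30 days."""
    return median_tta_days > 30
```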
AU and TCR expressed as 0–1 ratios. TTA in days, capped at 30. Result is a 0–1 index, multiplied by 100 for a 0–100 score.
| Status | Velocity Index | Action |
|---|---|---|
| Healthy | ≥ 75 | Saturation in progress |
| Slowing | 50 – 74 | Investigation + targeted coaching |
| Stalled | < 50 | Executive escalation required |
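
A sketch of the index arithmetic. The published definition fixes the inputs (AU and TCR as 0–1 ratios, TTA capped at 30 days) and the 0–100 output; the equal-weight average and the TTA inversion below are assumptions:

```python
def velocity_index(au: float, tcr: float, tta_days: float) -> float:
    """0-100 Velocity Index (sketch).

    AU and TCR are 0-1 ratios; TTA is capped at 30 days and inverted so
    that faster autonomy scores higher. Equal weighting is an assumption.
    """
    if not (0.0 <= au <= 1.0 and 0.0 <= tcr <= 1.0):
        raise ValueError("AU and TCR must be 0-1 ratios")
    tta_term = 1.0 - min(tta_days, 30.0) / 30.0
    return (au + tcr + tta_term) / 3.0 * 100.0

def vi_status(vi: float) -> str:
    """Map the index to its published status band."""
    if vi >= 75:
        return "Healthy"
    if vi >= 50:
        return "Slowing"
    return "Stalled"
```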
Phase 01 — Discovery
A fixed-price 1–2 week engagement that produces a concrete decision package — not a 50-page slide deck. Every item below is a real artefact handed over at the end of Phase 1.
Our stance
Most consultancies treat their method as a black box — a mystique to justify the bill. We think the opposite: the method is the product. If you can read it, challenge it, and adapt it, you can trust it. And when the engagement ends, you keep the method.
Book a 30-minute diagnostic call. We can score your top use cases — or assess an existing AI system already in operation — live, with the TRARDI Impact Score™. No commitment required.