
The MorAIlity Operating Model

Our craft,
made public.

Two frameworks. Six phases. Zero guesswork. This is how MorAIlity scores every AI use case, ships systems that hold, and measures adoption beyond vanity metrics.

This page exposes MorAIlity's operating model for prioritization, build, and adoption. The dedicated assurance methodologies — system audit, sovereign exposure review, post-deployment assurance — are published at version 0 on the Framework page, as a distinct TRARDI Assurance discipline.

Framework 01

TRARDI Impact Score™

We score every AI use case across 5 axes, weighted by what determines success. No gut feeling, no vendor pitch. A transparent composite score that tells you where to start, what to defer, and what to kill.

01 · 30%

Business Impact

The ROI, risk reduction, or strategic value the use case unlocks. The axis that pays for the project.

02 · 20%

Data Readiness

Whether the data needed actually exists, is clean, labeled, and accessible without months of prep.

03 · 20%

Feasibility

Whether the solution is a proven pattern, custom engineering, or a research problem in disguise.

04 · 15%

Adoption Readiness

Whether users want the tool, tolerate it, or will actively block it. The #2 reason AI projects fail.

05 · 15%

Control & Governance Exposure

Regulatory surface, required controls, audit trail, documentation maturity, and supervisory load. Applies to every system — higher in regulated sectors, still measurable everywhere.

Composite formula
Score = (BI × 0.30) + (DR × 0.20) + (F × 0.20) + (AR × 0.15) + (GC × 0.15)

Each axis is scored 1–5, where 5 is always the best outcome. Since every axis contributes at least 1, the composite lies between 1 and 5; it is converted to a 0–100 percentage.

Scoring rubric
Business Impact
  1/5: No business case
  2/5: <€50k/year — nice-to-have
  3/5: €50k–100k/year — efficiency
  4/5: €100k–500k/year — significant
  5/5: >€500k/year OR strategic differentiator

Data Readiness
  1/5: No data or access blocked
  2/5: Partial, significant cleaning needed
  3/5: Fragmented, integration required
  4/5: Exists, light ETL needed
  5/5: Clean, labeled, accessible (>10k samples)

Feasibility
  1/5: Research problem (2+ years out)
  2/5: Frontier approach, significant R&D
  3/5: Custom architecture but within SOTA
  4/5: Known patterns (RAG / fine-tuning)
  5/5: Off-the-shelf LLM solves it

Adoption Readiness
  1/5: HR / union / contractual blockers
  2/5: Active resistance, needs a champion
  3/5: Skeptical, needs value demo
  4/5: Neutral, workflow adjustable
  5/5: Users actively asking for it

Control & Governance Exposure
  1/5: High-risk per EU AI Act · heavy control load
  2/5: Regulated sector, DPIA mandatory · significant controls
  3/5: Standard B2C with GDPR · baseline controls needed
  4/5: Standard B2B, no sensitive PII · light controls
  5/5: Low exposure · internal tool, minimal supervisory load

Go — Priority 1

≥ 4.0

Go — Priority 2

3.0 – 3.9

Defer

2.0 – 2.9

Kill

< 2.0

Red flag rules — override the composite
01 · Feasibility = 1/5 → Auto-Kill

Can't be built. Business case doesn't matter.

02 · Business Impact = 1/5 → Auto-Kill

No ROI. Likely a vanity project.

03 · Data Readiness = 1/5 → Auto-Defer

Unblock data access before re-evaluating.

04 · Adoption Readiness = 1/5 → Auto-Defer

Resolve HR / union blockers first.
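To make the mechanics above concrete, here is a minimal sketch of the composite and its red-flag overrides in Python. The function and variable names are ours, not an official MorAIlity API; weights, thresholds, and override rules are taken from the tables on this page.

```python
# Sketch of the TRARDI Impact Score composite and red-flag overrides.
# Weights, decision bands, and override rules come from the tables above;
# function and variable names are illustrative.

WEIGHTS = {"BI": 0.30, "DR": 0.20, "F": 0.20, "AR": 0.15, "GC": 0.15}

def trardi_decision(ratings: dict) -> tuple[float, str]:
    """ratings maps each axis to a 1-5 score; returns (composite, decision)."""
    composite = round(sum(WEIGHTS[a] * ratings[a] for a in WEIGHTS), 2)
    # Red-flag rules override the composite score.
    if ratings["F"] == 1 or ratings["BI"] == 1:
        return composite, "Kill"
    if ratings["DR"] == 1 or ratings["AR"] == 1:
        return composite, "Defer"
    # Otherwise, fall through to the decision bands.
    if composite >= 4.0:
        return composite, "Go (Priority 1)"
    if composite >= 3.0:
        return composite, "Go (Priority 2)"
    if composite >= 2.0:
        return composite, "Defer"
    return composite, "Kill"

# Strong business case, clean data, known pattern, lukewarm users:
score, decision = trardi_decision({"BI": 5, "DR": 5, "F": 4, "AR": 4, "GC": 3})
print(round(score * 20), decision)  # composite ×20 gives the 0-100 percentage
```

Note how a single 1/5 on Feasibility or Business Impact kills the candidate no matter how high the other axes score — the overrides are checked before the bands.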

Operating model

Six phases.
Every one with a measurable output.

No phase without a clear deliverable. The Impact Score shapes the early decisions; the Velocity Index tracks real adoption through deployment and informs governance at scale.

Phase 01 · 1–2 weeks

Read the organization

Discovery

Phase 02 · 1 week

Kill false starts early

Arbitration

Phase 03 · 3–6 weeks

Prove the value

Architecture & Prototype

Phase 04 · 2–4 weeks

Ship to production

Production Deployment

Phase 05 · 2–6 weeks

Install the usage

Adoption Program

Phase 06 · Ongoing

Make it last

Governance & Scale

Framework 02

Adoption Velocity Index™

Most AI projects fail not on tech, but on adoption. Vanity metrics like “12,000 logins” hide the truth. Our Velocity Index tracks the three metrics that matter: active users, task completion, time-to-autonomy. Weekly, with automated alerts.

AU · 50%

Active Users

% of the target population using the tool at least once in the past 7 days.

Healthy: ≥ 80% by Week 8

Red flag: Drop >10 pts week-over-week

TCR · 30%

Task Completion Rate

Of tasks started in the tool, % actually completed (not abandoned).

Healthy: ≥ 75% sustained

Red flag: < 60% for 2 consecutive weeks

TTA · 20%

Time-to-Autonomy

Median days between training and first solo task without a coach.

Healthy: < 14 days

Red flag: > 30 days median
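The three red-flag conditions above can be checked mechanically each week. A sketch follows — the function name and data shapes are ours, not a published interface; only the thresholds come from the metric cards.

```python
# Sketch of the weekly red-flag checks for the three AVI metrics.
# Thresholds come from the metric cards above; everything else is illustrative.

def weekly_red_flags(au_weekly, tcr_weekly, tta_median_days):
    """au_weekly / tcr_weekly: weekly 0-1 ratios, most recent last."""
    flags = []
    # AU: drop of more than 10 percentage points week-over-week.
    if len(au_weekly) >= 2 and au_weekly[-2] - au_weekly[-1] > 0.10:
        flags.append("AU drop >10 pts week-over-week")
    # TCR: below 60% for 2 consecutive weeks.
    if len(tcr_weekly) >= 2 and all(v < 0.60 for v in tcr_weekly[-2:]):
        flags.append("TCR < 60% for 2 consecutive weeks")
    # TTA: median above 30 days.
    if tta_median_days > 30:
        flags.append("TTA median > 30 days")
    return flags

print(weekly_red_flags([0.80, 0.65], [0.58, 0.55], 35))  # all three fire
```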

Composite formula
AVI = (0.5 × AU) + (0.3 × TCR) + (0.2 × (1 − min(TTA / 30, 1)))

AU and TCR expressed as 0–1 ratios. TTA in days, capped at 30. Result is a 0–1 index, multiplied by 100 for a 0–100 score.

Overall AVI interpretation
[Adoption curve milestones: W1 Pilots ≈ 20% · W4 Early majority ≈ 50% · W8 Critical mass ≈ 80% · W12 Saturation ≈ 90%]

Healthy

≥ 75

Saturation in progress

Slowing

50 – 74

Investigation + targeted coaching

Stalled

< 50

Executive escalation required
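Putting the formula and the status bands together, here is a minimal sketch; the function names are ours, while the weights, the 30-day TTA cap, and the band cut-offs come from the formula and table above.

```python
# Sketch of the Adoption Velocity Index and its interpretation bands.
# Weights and thresholds are taken from the formula and table above.

def adoption_velocity_index(au, tcr, tta_days):
    """au, tcr as 0-1 ratios; tta_days is the median time-to-autonomy."""
    tta_term = 1 - min(tta_days / 30, 1)  # 0 days -> 1.0; capped: 30+ days -> 0.0
    return 100 * (0.5 * au + 0.3 * tcr + 0.2 * tta_term)

def avi_status(avi):
    if avi >= 75:
        return "Healthy"   # saturation in progress
    if avi >= 50:
        return "Slowing"   # investigation + targeted coaching
    return "Stalled"       # executive escalation required

# 82% active users, 78% task completion, 10-day median time-to-autonomy:
avi = adoption_velocity_index(0.82, 0.78, 10)
print(round(avi, 1), avi_status(avi))
```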

Phase 01 — Discovery

What you receive after Phase 1.

A fixed-price 1–2 week engagement that produces a concrete decision package — not a 50-page slide deck. Every item below is a real artefact handed over at the end of Phase 1.

  • 01 · AI Readiness Report — scoped to your organization, not a template
  • 02 · Scored use-case portfolio — each candidate rated across the 5 Impact Score axes
  • 03 · Go / no-go decision per candidate, with the rationale
  • 04 · Architecture direction for the top candidates (target stack, deployment mode)
  • 05 · Executive debrief with next-step recommendation (audit, prototype, or stop)

Our stance

Why we publish our craft.

Most consultancies treat their method as a black box — a mystique to justify the bill. We think the opposite: the method is the product. If you can read it, challenge it, and adapt it, you can trust it. And when the engagement ends, you keep the method.

Apply this methodology to your organization.

Book a 30-minute diagnostic call. We can score your top use cases — or assess an existing AI system already in operation — live, with the TRARDI Impact Score™. No commitment required.