Adaptive Adoption · Pillar 5 of 7
Design & Prototype
"We don't plan our way to the future; we prototype our way there."
Why This Pillar Exists
Traditional change assumes a known future state and builds a plan to get there. AI adoption has no known future state — the technology shifts faster than any plan can track.
The future state fallacy is the most expensive assumption in enterprise change. When the target moves quarterly, investing months in a fixed plan produces an artifact that's obsolete on delivery. Pillar 5 replaces the plan with the sprint — fast, cheap experiments that let the organization learn its way to the future rather than predict it.
What It Replaces
Fixed future state → Emergent direction through iteration
Business case → Hypothesis
Pilot program → Sprint experiment
Annual planning → Weekly build-measure-learn cycles
Requirements gathering → Rapid prototyping
Change readiness → Experiment readiness
Diagnostic Model
Can I? · Capability · Design & Build Skills
  • Hypothesis formulation
  • Rapid prototyping / no-code tooling
  • Measurement design
  • Context engineering & orchestration
Why should I? · Motivation · Perfection Addiction
  • Fear of unfinished work
  • "Need more data" paralysis
  • Sunk cost attachment
  • Risk aversion rewarded
Should I? · Trust · Permission to Experiment
  • Safety to fail publicly
  • "Rough work" valued
  • Budget for experiments
  • Leadership tolerance
Am I enabled? · Opportunity · Experiment Infrastructure
  • No-code/low-code sandboxes
  • Protected tinkering time
  • Sprint cadence established
  • Gallery for sharing results
Most organizations have the desire to experiment but lack the infrastructure. Pillar 5 diagnoses which barrier is primary — and it's usually Opportunity.
The Build Process
B · Bound the Experiment
Define the hypothesis, the minimum viable test, and the kill criteria. Not "let's try AI" but "we believe [X] will produce [Y], and we'll know in [Z] time."
U · Unbundle the Problem
Break the initiative into independently testable components. Prototype the riskiest assumption first.
I · Iterate in Sprints
One- to two-week cycles. Build the smallest thing that tests the hypothesis. Measure. Adjust. Repeat.
L · Learn Before Scaling
Extract the learning, not just the outcome. Why did it work? Under what conditions? What transfers?
D · Decide: Scale, Pivot, or Kill
Every experiment ends with a decision. No zombie pilots that run forever without evaluation.
Key Tools
  • No-Code / Low-Code Sandboxes — tools for non-technical builders to prototype AI workflows
  • Experiment Canvas — hypothesis, test design, success criteria, kill criteria on one page
  • Prototype Gallery — digital museum of experiments (successes AND failures)
  • Sprint Templates — structured 1-2 week experiment cycles
  • Value Capture Tracker — measuring what was learned, not just what was built
  • Innovation Myths Deck — cards that challenge limiting beliefs about experimentation
Practitioner Behaviors
  • Bias for Action
  • "Rough Work" Transparency
  • Falling in Love with the Problem
  • Context Engineering
  • Orchestration
  • Kill Discipline
  • Hypothesis First
  • Celebrate Dead Ends
Leadership Delta

Stop funding plans. Start funding experiments. The ROI of a prototype is learning, not revenue. Leaders who demand business cases for experiments will get safe bets that teach nothing.

The design-and-prototype leader protects tinkering time, celebrates dead ends that generated learning, and asks "What did we learn?" before "Did it work?". Failure tolerance without learning extraction is just waste.

Common Failure Modes
Zombie pilots — experiments that run forever without evaluation or kill criteria
Perfectionism disguised as thoroughness — "we need more data" as avoidance of action
Scaling before understanding — replicating a pilot's outputs without replicating its conditions
Prototyping without hypotheses — "let's try AI" is not an experiment, it's tourism
Innovation theater — hackathons and demo days that produce excitement but no sustained practice
Confusing prototype failure with initiative failure — killing the program because one experiment didn't work
Intellectual Backdrop
  • Ries — Lean Startup, build-measure-learn (2011)
  • Brown — Design thinking, IDEO (2009)
  • Sarasvathy — Effectuation, entrepreneurial logic (2001)
  • Snowden — Safe-to-fail probes, Cynefin (1999)
  • Gibbons — Prototype-native change design (2026)