PILLAR 3

Consciously Manage Trust
Trust is the antivenom to change resistance — but undertrust and overtrust are both failure modes.
WHY THIS PILLAR EXISTS
Every change framework treats resistance as the enemy. Pillar 3 reframes it: resistance is usually a trust signal. But blind trust in AI outputs is equally dangerous.

The undertrust/overtrust duality is genuinely novel — no other change framework treats both as failure modes. Undertrust produces resistance and shadow rejection. Overtrust produces automation bias, unchecked hallucinations, and ethical blind spots. Trust must be calibrated, not maximized.

WHAT IT REPLACES
OLD → NEW
Overcome resistance → Calibrate trust
Trust as binary (trust/distrust) → Trust as spectrum (undertrust ↔ overtrust)
Resistance management → Trust diagnosis
Persuasion campaigns → Psychological safety infrastructure
One-time trust building → Continuous trust calibration
DIAGNOSTIC MODEL

Trust Calibration Matrix

CAPABILITY — Verification Skill ("Can I trust my judgment?")
AI output evaluation skill, source verification habits, hallucination detection, confidence calibration

MOTIVATION — Trust Incentives ("Why should I trust?")
Technology betrayal history, identity threat from AI, org honesty about AI impact, personal AI success/failure stakes

TRUST — Psychological Safety ("Is it safe to trust/distrust?")
Permission to challenge outputs, safety to report failures, leadership skepticism modeling, tolerance for "not ready yet"

OPPORTUNITY — Trust Infrastructure ("Am I enabled to calibrate?")
AI decision transparency, nutrition labels available, feedback loops when wrong, escalation paths for concerns

Diagnose where on the trust spectrum each stakeholder sits before designing any intervention. The intervention for undertrust is entirely different from the intervention for overtrust.
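
To make the matrix operational, here is a minimal Python sketch. All names are hypothetical, not part of the published framework: a diagnosis record scores the four dimensions and surfaces the weakest one, since the intervention should target that dimension first.

```python
# A minimal sketch of the Trust Calibration Matrix as a data structure.
# All names are illustrative, not from the framework itself.
from dataclasses import dataclass
from enum import Enum


class SpectrumPosition(Enum):
    UNDERTRUST = "undertrust"   # rejection, shadow workarounds
    CALIBRATED = "calibrated"   # trust matches demonstrated reliability
    OVERTRUST = "overtrust"     # automation bias, unchecked outputs


@dataclass
class TrustDiagnosis:
    stakeholder: str
    position: SpectrumPosition
    # One 1-5 score per matrix dimension:
    capability: int    # "Can I trust my judgment?"       (verification skill)
    motivation: int    # "Why should I trust?"            (trust incentives)
    safety: int        # "Is it safe to trust/distrust?"  (psychological safety)
    opportunity: int   # "Am I enabled to calibrate?"     (trust infrastructure)

    def weakest_dimension(self) -> str:
        """Return the dimension to target first; interventions differ by dimension."""
        scores = {
            "capability": self.capability,
            "motivation": self.motivation,
            "safety": self.safety,
            "opportunity": self.opportunity,
        }
        return min(scores, key=scores.get)


# Example: an undertrusting analyst whose real blocker is psychological
# safety, so the fix is safety infrastructure, not a persuasion campaign.
analyst = TrustDiagnosis("claims analyst", SpectrumPosition.UNDERTRUST,
                         capability=4, motivation=3, safety=1, opportunity=3)
assert analyst.weakest_dimension() == "safety"
```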

PROCESS: THE TRUST METHOD
T — Test Assumptions: Surface implicit trust/distrust assumptions. What do people actually believe about this AI system's reliability?
R — Read the Spectrum: Map each stakeholder to undertrust (rejection), calibrated trust (healthy), or overtrust (automation bias).
U — Understand the Source: Is this an ability concern, a benevolence concern, or an integrity concern? (Mayer, Davis & Schoorman's ABI framework)
S — Shape the Environment: Build trust infrastructure: transparency, feedback loops, escalation paths, psychological safety.
T — Track and Recalibrate: Trust is not a one-time achievement. AI capabilities change monthly; trust calibration must keep pace. (A measurement sketch follows this list.)
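
The Track step can be made measurable. A hedged sketch, assuming two data feeds the framework implies but does not specify (pulse-check trust ratings and audited output accuracy for the same use case): the gap between stated trust and observed reliability signals drift toward overtrust or undertrust.

```python
# A minimal sketch of trust recalibration tracking (illustrative only).
# Assumes two hypothetical data feeds: pulse-check trust ratings (0-1)
# and audited-accuracy rates (0-1) for the same AI use case.

def calibration_gap(trust_ratings: list[float], observed_accuracy: float) -> float:
    """Positive gap = overtrust (trust exceeds reliability);
    negative gap = undertrust (trust lags reliability)."""
    mean_trust = sum(trust_ratings) / len(trust_ratings)
    return mean_trust - observed_accuracy


def flag(gap: float, tolerance: float = 0.15) -> str:
    """Turn a calibration gap into a coarse intervention signal."""
    if gap > tolerance:
        return "overtrust: add verification checkpoints"
    if gap < -tolerance:
        return "undertrust: publish the track record, run open-kitchen demos"
    return "calibrated: keep monitoring (capabilities change monthly)"


# Example: a team rates the summarizer ~0.9 but audits show 65% accuracy.
gap = calibration_gap([0.9, 0.95, 0.85], observed_accuracy=0.65)
print(flag(gap))  # -> "overtrust: add verification checkpoints"
```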
KEY TOOLS
Trust Barometer — pulse check: "Do you trust this output?" mapped across teams and use cases
Algorithm Nutrition Labels — data sources, known biases, intended use, confidence intervals (schema sketched after this list)
Skeptic Roundtables — dedicated sessions to vent concerns without judgment
"Open Kitchen" Demos — showing messy, unfinished AI work to build realistic expectations
Risk/Fear Register — structured log of "what keeps me up at night" concerns
Ethical "Stop" Cord — frontline veto power on AI deployment
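
As noted above, the nutrition label lends itself to a structured schema. A minimal sketch follows; the field names are assumptions, since the pillar specifies only the four label ingredients.

```python
# A minimal sketch of an Algorithm Nutrition Label as a typed record
# (Python 3.9+). Field names are illustrative; the pillar names only the
# categories: data sources, known biases, intended use, confidence intervals.
from dataclasses import dataclass


@dataclass
class NutritionLabel:
    system: str
    data_sources: list[str]
    known_biases: list[str]
    intended_use: str
    confidence_interval: tuple[float, float]  # e.g. 95% CI on audited accuracy


# Hypothetical example label, published alongside a deployed model.
label = NutritionLabel(
    system="invoice-classifier v3",
    data_sources=["2019-2024 ERP invoices", "vendor master data"],
    known_biases=["underrepresents non-EU vendors"],
    intended_use="routing invoices to approval queues, not fraud detection",
    confidence_interval=(0.91, 0.95),
)
```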
PRACTITIONER BEHAVIORS
Vulnerability First — Leaders admitting what they don't know
Calling Out "The Killers" — Flagging trust-destroying phrases
Consistency — Doing what you say you will
Assume Positive Intent — Interpreting skepticism charitably
Trust Audit Habit — Regularly checking your own calibration
Transparent Uncertainty — "70% confident" over false certainty
Escalation Without Blame — Treating failures as learning signals
Overtrust Vigilance — Questioning "too easy" moments
LEADERSHIP DELTA: THE TRUST DELTA

Leaders must model calibrated trust — neither the AI evangelist who dismisses all skepticism, nor the fearful leader who blocks all experimentation. Both destroy trust.

Psychological safety is the precondition for trust calibration. If people cannot safely say "I don't trust this output," they will comply without trusting — and the organization learns nothing about where AI is actually reliable.

COMMON FAILURE MODES
Treating all skepticism as resistance — when some of it is accurate risk assessment
Overtrusting AI outputs because they "sound confident" — automation bias
Building trust through persuasion rather than transparency and track record
One-time trust-building events that don't account for shifting AI capabilities
Psychological safety as HR program rather than leadership behavior
Ignoring the undertrust/overtrust spectrum — treating trust as binary
INTELLECTUAL BACKDROP
Mayer, Davis & Schoorman — ABI trust model (1995)
Edmondson — Psychological safety (1999)
Parasuraman & Riley — Automation use, misuse, and disuse (1997)
Lee & See — Trust in automation (2004)
Gibbons — Trust calibration spectrum (2026)