The undertrust/overtrust duality is genuinely novel: no other change framework treats both as failure modes. Undertrust produces resistance and shadow rejection; overtrust produces automation bias, unchecked hallucinations, and ethical blind spots. Trust must therefore be calibrated, not maximized.
Trust Calibration Matrix
Diagnose where each stakeholder sits on the trust spectrum before designing any intervention: the right intervention for undertrust is entirely different from the right intervention for overtrust.
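To make the diagnose-then-intervene step concrete, here is a minimal Python sketch of how such a matrix might be encoded for a stakeholder assessment. The names (`TrustState`, `Stakeholder`, `intervention_for`) and the specific intervention directions are hypothetical illustrations, not part of the framework as stated:

```python
from dataclasses import dataclass
from enum import Enum, auto


class TrustState(Enum):
    """Where a stakeholder sits on the trust spectrum."""
    UNDERTRUST = auto()   # resistance, shadow rejection
    CALIBRATED = auto()   # trust tracks actual reliability
    OVERTRUST = auto()    # automation bias, unchecked outputs


@dataclass
class Stakeholder:
    name: str
    trust_state: TrustState


def intervention_for(s: Stakeholder) -> str:
    """Map a diagnosed trust state to an intervention direction.

    The interventions below are illustrative placeholders, not
    prescriptions from the framework.
    """
    if s.trust_state is TrustState.UNDERTRUST:
        # Build warranted trust: low-stakes wins, transparency into outputs.
        return "exposure, quick wins, transparency"
    if s.trust_state is TrustState.OVERTRUST:
        # Add warranted doubt: verification habits, review of known failures.
        return "verification habits, error review"
    # Calibrated: maintain, and keep checking that reliability hasn't shifted.
    return "monitor and maintain"


if __name__ == "__main__":
    team = [
        Stakeholder("analyst", TrustState.UNDERTRUST),
        Stakeholder("ops lead", TrustState.OVERTRUST),
    ]
    for member in team:
        print(f"{member.name}: {intervention_for(member)}")
```

The point of the encoding is the asymmetry: the two failure modes share no interventions, so a single "build more trust" program cannot serve both.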
Leaders must model calibrated trust: neither the AI evangelist who dismisses all skepticism, nor the fearful leader who blocks all experimentation. Both extremes destroy calibration.
Psychological safety is the precondition for trust calibration. If people cannot safely say "I don't trust this output," they will comply without trusting — and the organization learns nothing about where AI is actually reliable.