Adaptive Adoption · Pillar 7 of 7
Manage Ethics Always
"Ethics isn't a brake — it's the steering that allows speed. Every catastrophic ethics failure happened inside organizations with compliance frameworks."
Why This Pillar Exists
In practice, business ethics has been reduced to a legal risk-mitigation function. Governance dashboards tell you whether you followed the process. They cannot tell you whether the process was worth following.
VW's defeat device passed through compliance. Enron had a printed ethics code. Wells Fargo had governance dashboards throughout the fake-accounts scandal. Boeing had safety compliance processes while the 737 MAX crashes killed 346 people. Every catastrophic ethics failure happened inside an organization that had a compliance framework. Ethics must be a practiced frontline capability, not a governance function delegated to legal.
What It Replaces
Old → New
Ethics as compliance checklist → Ethics as practiced capability
Governance dashboards → Frontline ethical reasoning
Legal risk mitigation → Moral imagination
Annual ethics training → Ethics embedded in every sprint
"Can we?" → "Should we?"
Ethics committee review → Distributed ethical practice
Diagnostic Model — Four Ethics Barriers
Capability
"Can I?"
Ethical Reasoning Skill
• Harm-focused pre-mortem ability
• Stakeholder impact mapping
• Bias detection in AI outputs
• Second-order consequence thinking
Motivation
"Why should I?"
Speed vs. Ethics Tension
• "Ethics slows us down" belief
• ROI pressure overriding ethical reflection
• Moral disengagement ("not my department")
• Diffusion of responsibility in AI systems
Trust
"Should I?"
Safety to Raise Concerns
• Psychological safety to say "stop"
• Track record when people raised ethical flags
• Whistleblower protection (real, not policy)
• "Stop the line" authority actually respected?
Opportunity
"Am I enabled?"
Ethics Infrastructure
• Structured ethical reflection in sprint cadence?
• Ethics canvas / pre-mortem tools available?
• Model cards / nutrition labels deployed?
• Escalation path for ethical concerns exists?
Note: the most common failure is locating ethics in legal/compliance rather than in the hands of the people doing the work. Ethics that lives in a department will never catch the harms that emerge from frontline use.
Process — ETHIC Method
E
Examine the Harm Surface — Who is affected by this AI system? Who is affected differently? What is the worst plausible misuse? Run a harm-focused pre-mortem before deployment.
T
Test Assumptions About Fairness — What are we assuming about bias, equity, and differential impact? Are those assumptions tested or hoped?
H
Hear the Dissent — Create structured space for ethical objection. Red-team the deployment. Give frontline teams "stop the line" authority.
I
Iterate Ethically — Ethics isn't a gate at the end — it's a reflection built into every sprint. What new ethical questions emerged this cycle?
C
Calibrate Continuously — AI capabilities shift monthly. Ethical evaluation must keep pace. Yesterday's acceptable risk may be today's harm.
Key Tools
Ethics Canvas — Business Model Canvas adapted for ethical risk: who is affected, what could go wrong, what are we assuming

Ethical Pre-Mortem — "Black Mirror" session: imagine the worst plausible ethical failure and work backward

Model Cards / Nutrition Labels — data sources, known biases, intended use, confidence intervals for every AI system

Stakeholder Impact Maps — mapping those subject to AI decisions, not just those making them

HITL Gates — Human-in-the-Loop sign-offs calibrated to risk level

Red Teaming Rituals — designated Devil's Advocate with structured authority to challenge
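Two of the tools above, model cards and risk-calibrated HITL gates, can be made concrete as a small data structure plus a sign-off check. This is a minimal sketch under stated assumptions: the field names, the "low/medium/high" risk tiers, and the example loan-approval card are all illustrative, not a standard schema.

```python
from dataclasses import dataclass, field

# Sketch of a model card as a decision tool rather than a compliance
# artifact: the fields below (name, intended_use, data_sources,
# known_biases, out_of_scope_uses) are illustrative assumptions.

@dataclass
class ModelCard:
    name: str
    intended_use: str
    data_sources: list
    known_biases: list
    out_of_scope_uses: list = field(default_factory=list)

def hitl_required(risk_tier: str) -> bool:
    """Risk-calibrated HITL gate: require a human sign-off for
    medium/high-risk decisions; low-risk decisions proceed with
    post-hoc audit. Tier names are illustrative."""
    return risk_tier in {"medium", "high"}

# Hypothetical example card for a lending model.
card = ModelCard(
    name="loan-approval-v2",
    intended_use="Rank applications for human review, not auto-decline",
    data_sources=["2019-2023 internal applications"],
    known_biases=["under-represents thin-file applicants"],
    out_of_scope_uses=["employment screening"],
)

assert hitl_required("high")      # high-risk decision needs sign-off
assert not hitl_required("low")   # low-risk decision is audited later
```

The design point is that the card travels with the model and is consulted at the gate, so "the algorithm decided" is no longer available as a moral abdication.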
Practitioner Behaviors
"Should We > Can We" — the moral pause button before every deployment
"Stop the Line" Authority — frontline veto power, actually used
Radical Transparency — admitting AI limitations publicly
Constructive Paranoia — watching for failure modes others dismiss
Ethical Sprint Reflection — structured conversation every cycle
Bias Vigilance — actively looking for differential impact
Moral Courage — speaking truth to ROI when ethics demands it
Stakeholder Empathy — centering those affected, not those deciding
Leadership Delta
Ethics is not a department. It is a leadership behavior. Leaders who delegate ethical reasoning to compliance will discover — too late — that compliance cannot catch emergent harms from AI systems that evolve faster than policy.
The ethical leader practices moral reasoning publicly, creates genuine psychological safety for dissent, and treats "stop the line" as a sign of organizational health rather than a threat to speed. The VW engineers knew. The Boeing engineers knew. They didn't speak up. That's a leadership failure, not a compliance gap.
Common Failure Modes
Governance theater — dashboards that measure process compliance, not ethical outcomes
Ethics as a gate at the end rather than a practice throughout
Locating ethics in legal/compliance rather than in frontline practice
"Move fast and break things" culture that treats ethical reflection as friction
Model cards that exist but nobody reads — compliance artifacts, not decision tools
Diffusion of responsibility — "the algorithm decided" as moral abdication
Intellectual Backdrop
Aristotle — phronesis (practical wisdom), Nicomachean Ethics

Arendt — banality of evil, moral thinking as practice (1963)

Rest — Four-component model of moral behavior (1986)

Gibbons — ethics as practiced capability, The Science of Organizational Change (2015)

Floridi — AI ethics, information ethics (2023)

Mitchell et al. — Model cards for model reporting (2019)