Why traditional change frameworks fail in the AI era
From rigid steps to adaptive mindsets: building change for the age of AI.
Welcome back to Atomic Leadership. In this article, we’ll explore what it truly takes to lead through disruption - and why the classic change management models so many leaders lean on, like ADKAR and Kotter, are falling short in the face of AI-driven transformation.
This is a topic close to my heart: after nearly two decades leading customer operations, coaching leaders at all levels, and advising organisations on digital transformation, I’ve seen firsthand how old frameworks break down when technology shifts faster than culture can keep up.
AI is rewriting the rules, like it or not, and if leaders don’t adapt their playbook now, they’ll be left behind. Some already are.
Change. We’ve been talking about it since…forever. You know that saying, “the only constant is change”? We’ve been hearing it over and over. New systems, new processes, new org charts. Every leader has had “managing change” somewhere on their checklist.
But…
Most of the change management playbooks we rely on today were built for a world without AI. A world where change moved in predictable waves. Where leaders had time to prepare people, test things slowly, and roll out a neat communication plan.
That world doesn’t exist anymore.
AI has ripped up the timeline. Change is no longer a steady wave - it’s a flood that keeps coming, faster each month. Leaders who cling to old frameworks end up drowning their teams in resistance, confusion, and fear.
So why exactly does traditional change management fail in the age of AI?
It assumes time you don’t have
Old models are step-by-step: awareness, desire, knowledge, ability, reinforcement. Sounds tidy. But AI adoption doesn’t wait for your 6-month rollout plan. Tools appear overnight, employees try them without approval, and competitors are already ahead before your kickoff workshop.
It underestimates emotions
AI isn’t just a new tool - it feels threatening. Will it replace my job? Can I trust it? Do my skills still matter? Traditional change methods skim over these fears. In the AI era, emotions are the change. If leaders don’t address them openly, resistance festers.
It treats people like an afterthought
Classic change frameworks were designed for system upgrades, not identity shifts. But AI changes how we see ourselves at work. From decision-making to creativity, people are redefining their value. Leaders who don’t invite employees into that conversation risk alienating the very people they need to succeed.
Take Satya Nadella, one of the world’s leading technologists and CEO of Microsoft:
“Technology changes quickly, but trust changes slowly. Leaders must bridge that gap if they want their people to embrace AI as a partner rather than a threat."
- Satya Nadella, Hit Refresh: The Quest to Rediscover Microsoft’s Soul and Imagine a Better Future for Everyone (2017)
Satya Nadella also said in an interview that everyone is talking about change, but nobody wants to change - you’d rather have the other person change while you remain the same. That’s true for many leaders today, who demand change from those they lead but fail to change themselves.
Understanding what’s wrong with them
Let’s look at two of the most widely used change management frameworks to better understand where the opportunities lie.
ADKAR (Awareness, Desire, Knowledge, Ability, Reinforcement)
Designed for: linear, planned transformations (e.g., rolling out a new CRM system).
Where it breaks down in AI-driven change:
Linear steps vs. exponential change: ADKAR assumes you can move people step-by-step through a predictable sequence. AI adoption doesn’t work like that - tools evolve weekly, and employees often experiment with them before leadership has even defined a plan. The neat sequence collapses under speed.
Over-focus on individual change: ADKAR centers on individual employee behavior, but AI disruption is systemic - it changes roles, workflows, organizational identity, and culture. It’s not enough to manage one person’s awareness and ability.
Knowledge bottleneck: The “Knowledge” stage assumes leaders can provide stable, structured training. But with AI, knowledge is fluid. What you teach today may be outdated in six months.
👉 Bottom line: ADKAR is too slow, too rigid, and too individualistic for an environment where change is continuous, collective, and accelerated.
Kotter’s 8-Step Model
Steps: Create urgency → Build coalition → Develop vision → Communicate → Remove barriers → Generate wins → Sustain acceleration → Anchor in culture.
Where it breaks down in AI-driven change:
Urgency is already here: Kotter starts with “create a sense of urgency.” In AI adoption, urgency doesn’t need to be manufactured - employees already feel it (or fear it). The problem isn’t urgency; it’s anxiety. Leaders need to calm, not hype, their people.
Coalitions can’t keep pace: Building guiding coalitions and consensus takes time. By the time your steering committee agrees, the AI landscape has already shifted.
Communication overload: Kotter emphasizes vision communication. But in an AI world, dialogue is more important than broadcasting. Leaders need feedback loops, not one-way comms.
Anchoring in culture doesn’t stick: AI transformation isn’t a one-off project you can “anchor.” The culture itself must evolve continuously, because AI capabilities evolve continuously.
👉 Bottom line: Kotter’s model assumes change is episodic and can be stabilized. AI-driven change is perpetual - it never stabilizes.
Funny thing - in his 2014 book “Accelerate (XLR8)”, Mr. Kotter himself - the man who created the framework - admitted that his original 8-step model isn’t fast enough for the pace of change today. He explicitly said the old model wasn’t enough anymore and that organizations needed to build strategic agility.
"Change used to be episodic, but in today’s environment it’s continuous and relentless. Organizations that cling to old models will simply be outpaced."
- John P. Kotter, Accelerate
Essentially, both ADKAR and Kotter share assumptions that no longer hold:
Change is episodic: A project with a beginning and an end. (Reality: AI is a constant stream of shifts.)
Change is leader-directed: Leaders define the path, employees follow. (Reality: employees often drive AI adoption bottom-up, experimenting before leadership.)
Change is knowable: Leaders can design the “future state” and move people toward it. (Reality: with AI, the future state is emergent, unpredictable, and constantly shifting.)
My Point
As leaders managing change, we should be focusing on:
speeding up the cycle: shorten change rollouts from months to weeks. Share updates often, even if incomplete.
leaning into dialogue, not broadcasts: create open forums where employees can ask questions about AI without judgment.
framing AI as a partnership, not a replacement: remind people their skills matter, and show how AI enhances rather than erases their contributions.
building adaptability as a skill: train teams to be curious, flexible, and experimental. Resilience means bouncing back. Adaptability means moving forward.
Leaders who get this right don’t just survive AI-driven change - they thrive in it. They build teams that are less fearful, more curious, and more willing to experiment.
The age of AI demands new leadership muscles. Not the old “manage change” muscle, but the “lead through uncertainty” one. And that’s a skill you can start building today.
Ok so…
🌍 What am I proposing?
Leaders need a shift from managing change to leading adaptability. That means:
Continuous change loops, not linear stages: Borrow from agile and lean - test, learn, adapt. Change never “finishes.” It’s a continuous loop. It keeps going.
Collective sensemaking: Create open forums for teams to explore AI together, instead of pushing top-down comms. Hello, town halls. More of them, please. Fewer memos and heavy email comms, please.
Adaptive learning over fixed knowledge: Teach people how to learn, not just what to learn. Build “learning agility” as a muscle. Hello, micro-learnings & demos. Bye-bye, heavy, long & rigid one-time trainings.
Psychological safety & promoting experimentation: Fear is the biggest blocker in AI adoption. Leaders must normalize uncertainty and make it safe to experiment.
It’s ok to fail small in order to win big. Big wins are a collection of small wins and failures.
Retros are valuable, but a one-off ‘after X days’ retro is not enough - make retros part of continuous feedback loops so learnings are measured, assigned, and integrated into work and governance.
Quick note here about experimentation (I’m writing about this in a separate article soon): Experimentation needs guardrails in AI projects. “Fail fast” without ethical, privacy, and safety guardrails is dangerous in AI. Experiments must be risk-assessed, sandboxed, and subject to ethical checkpoints.
Ethics and trust at the center: Unlike past tech changes, AI touches identity, fairness, and values. Trust-building becomes the primary leadership function.
✨ The New Imperative
Kotter and ADKAR helped in a world where change was episodic, predictable, and largely technical. But AI isn’t just a new system - it’s a new paradigm.
Change frameworks that treat disruption like a checklist will keep failing. What leaders need now is an adaptive, human-centered approach: one that embraces uncertainty, accelerates cycles, and puts trust at the heart of transformation.
About Me
Hi, I’m Benonica Angelova. With 16 years as a CX Leader, People Manager, and Coach, I created this Substack to empower people to transform their careers and their relationship with work, colleagues, and themselves. I mentor startup leaders & founders, and I write about leadership, coaching, startups, and the role of AI in shaping our world. If you’re looking for a mentor, let’s talk.