Why Better Management Doesn't Lead to AI Success
Reading the title of this video, you could be forgiven for thinking we are going to talk about a leadership problem.
It's a natural conclusion given the
number of articles, posts, and even videos
that discuss the challenge in this way.
Then there's the data point: Microsoft studied 300,000 users and found that 80% abandoned AI tools within three weeks.
Three weeks after the budgeted training, after the rollout, after the executive mandate, the change management consultants, and the Slack channel that nobody reads, they stopped using the AI tools that everyone says will change life as we know it.
Welcome to the Enduring Advantage podcast.
I'm your host, Zachary Alexander.
The headline writes itself: the 201 problem leads to mass abandonment. What this means is that people got past the basics but couldn't get to the next level.
And when you see abandonment at this
scale, the instinct is to reach for
better leadership as the answer.
Vision, culture, executive commitment,
the things that feel strategic.
It's a reasonable instinct.
It's also wrong.
Not because leadership doesn't matter, but because leadership, like management, assumes the problem lives in the operator: the employee, their motivation, their discipline, their unwillingness to push through.
What if the problem isn't
in the operator at all?
What if the organization deployed
every tool except the one that
would actually make them effective?
Before we go any further, let's
talk about how we got here.
AWS holds re:Invent every December. The flagship conference's sessions are labeled by skill level: 100, 200, 300, 400, and sometimes 500.
The number tells you
who the session is for.
100 is foundational. You're new to AWS or new to the topic: basic concepts, high-level use cases.
200 is intermediate. You have working knowledge; you're not starting from zero. Best practices, common patterns; you're expected to be familiar with each of these.
300 and above.
You've built things,
you're deep into the topic.
You consider architecture decisions,
implementation details, and edge cases.
The 201 problem is the
gap between 200 and 300.
Operators with working knowledge,
who can't reach the next level.
Not beginners, not experts.
Someone stuck in the
middle with nowhere to go.
They could also be tourists, unwilling to make the commitment to change. We've talked about tech tourists in the past: someone who isn't willing to commit even to a change that seems reasonable. Tech tourists are only here for the easy money. Once things get messy, they bail.
But today, let's stick with
the intermediate operators, the
ones who got past the basics but
weren't willing to stick with
it long enough to cross the gap.
Now, there are a lot of reasons why intermediate 201s abandon AI during the initial, bolt-on phase.
The training didn't stick. Three hours in a conference room watching someone demo prompts doesn't translate when you're behind on deliverables and the tool isn't doing what you expected.
Maybe the use case didn't address your specific situation. Leadership showed you how AI writes emails and summarizes documents, but your job isn't about emails and summaries. Your job is navigating the new client-relationship landscape without so much as a compass. There's stuff you haven't fixed yet, and too much client interaction without a solution is going to cause things to blow up.
Most generic use cases don't survive contact with your specific work, and the workflow disruption isn't worth it. You have a system. It works, and it's yours. You take it home with you when you walk out the door; you tuck it in at the end of the night when the day is done.
Unfortunately, the current generation of AI tools can't generate that same quality of effort yet. Who knows? In seven months, what you're asking for could be table stakes.
Management can't reinforce it. Using traditional incumbent strategies, the mandate comes down, the tools get deployed, and then silence. No follow-up, no accountability. No one is asking whether what you're building with is going to fix this problem.
Time pressure is the great strategy killer. I used to build enterprise architectures with great documents; they launched to incredible fanfare, but they never lasted past the third generation. The problem is that in traditional incumbent companies, you're paid to deliver, and when the quarter gets tight, the AI tab closes first and people rely on what's been working. Most companies still think doing it right is a luxury they can't afford.
Today's executives are on a three-year timeline. They've got to make as big an impression as possible, and then it's on to the next gig. Unfortunately, traditional incumbent companies are filled with self-serving firefighters looking for opportunities to play the savior.
These all sound reasonable. They all point to fixable problems: better training, clearer expectations, stronger reinforcement. And every single one of them assumes the same thing: the problem lives in the operator.
Think about it.
Training didn't stick.
That means the operator
didn't learn enough.
Use cases weren't specific enough.
That means the operator didn't
adapt them to their situation.
Workflow disruption wasn't worth it.
That means the operator
didn't push through.
Management didn't reinforce. That means the operator needed more accountability. Time pressure? The operator didn't prioritize correctly. Really?
Every diagnosis points
in the same direction.
Every solution asks the same thing.
Get the operator to change.
Learn better, adopt faster, push
harder, prioritize differently.
And here's what's interesting.
This assumption isn't malicious.
It's not some executive
conspiracy to blame the workforce.
It's the way things have always
worked in traditional companies.
You deploy a tool. People don't use it. Management thinks the people are the problem. The employees think: yeah, this was good, but we'll have to wait for the next regime to make it really work.
But there's another possibility: what if you deployed the wrong tool? Not the wrong AI. The AI works fine. ChatGPT works, Claude works, Copilot works. The models aren't the issue.
What if you deployed AI but withheld
the tool that makes AI productive
for intermediate operators?
What if the gap isn't in the
operator's unwillingness, but in
the organization's architecture?
So let's take better management seriously for a moment. What does better management of AI actually look like?
You improve the rollout: phased deployment instead of big bang. You pilot with program groups before they go company-wide. You gather lessons-learned feedback.
You improve the training. Not three hours in a conference room but ongoing engagement: office hours, champions in each department, Slack channels with real humans answering real questions.
You improve the reinforcement: managers check in weekly, usage dashboards track adoption, KPIs include AI utilization, and there are consequences for teams that don't engage.
You protect the time: Innovation Fridays, experimentation budgets, permission to fail while learning.
This is the management playbook, and it's not stupid. Every one of these interventions addresses a real friction point. If your rollout was chaotic, phased deployment helps. If your training was generic, ongoing enablement helps. If no one followed up, reinforcement helps.
But notice what all of
these have in common.
They're still asking the operators to do the work needed to cross the gap. You're just making the crossing slightly easier: better stairs, a handrail, someone cheering from the other side.
The gap is still there.
The intermediate operator still
has to take generic capabilities
and figure out how to apply them
to their specific situation.
They still have to build a bridge.
Better management reduces friction.
It doesn't shrink the gap.
And when the quarter gets tight,
when the real work piles up,
friction reduction isn't enough.
The gap wins.
Okay, so management isn't enough. What about leadership? This is where the conversation usually goes. Management is mechanics: rollout, training, reinforcement. Leadership is meaning: vision, the why behind the transformation.
So what does better leadership of AI
transformation actually look like?
You cast the vision. AI isn't just a tool, it's a strategic imperative. The future of the company depends on this. You paint the picture of what's possible when everyone embraces it.
You model the behavior. The CEO uses AI visibly. Senior leaders share their prompts in all-hands meetings. The message from the top is unmistakable: this matters.
You create psychological safety. Failure is learning. Experimentation is celebrated. No one gets punished for trying something that doesn't work.
You connect it to the purpose. This isn't about efficiency metrics. This is about the kind of company we want to be, the kind of work we want to do, the future we're building.
This is the leadership playbook, and it's not stupid either. Vision matters, modeling matters, safety matters, purpose matters.
But notice what's happening: you're still asking the operator to do the work, to cross the gap. You just added meaning to the request: better stairs, a handrail, someone cheering from the other side, and now a compelling speech about why the other side is worth reaching.
The gap is still there.
The intermediate operator still wakes
up Tuesday morning with a difficult
client situation and a generic AI tool.
Leadership gave them purpose. Leadership gave them permission. Leadership gave them inspiration. Leadership didn't give them the ability to connect "AI can help" to "here's how AI helps with the specific situation I'm facing right now." Better leadership adds motivation. It doesn't shrink the gap.
And here's the uncomfortable truth.
Motivation without capability is
just frustration with extra steps.
You convince the operator the
other side is worth reaching.
They still can't get there.
Now they feel worse about it.
Management says, try harder.
Leadership says, believe harder.
Neither said, here's the bridge.
There are entire industries
built on the assumption that the
problem lives in the operator.
Training companies, change management
consultants, learning management
systems, certification programs,
coaching frameworks, assessment tools.
Billions of dollars a year, all predicated on the same thesis: if you just train people better, they'll be able to cross the gap. And it's not that the training isn't worth anything. Training has value, but training is a friction-reduction play. It makes the crossing slightly easier. It doesn't shrink the gap.
The training-industrial complex needs the gap to exist. Their business model depends on it.
Every year, new tools. Every year, new training programs. Every year, the same 80% abandonment rate.
The incumbent approach asks: how do we get more people through our training pipeline? The infrastructure approach asks: how do we make the training pipeline unnecessary? Not unnecessary for everything. There's still onboarding, still context, still organizational culture to transmit. But unnecessary for the core problem: connecting generic AI capability to specific institutional knowledge.
This is not a training problem.
This is an architecture problem,
and you can't train your way
to an architecture solution.
You have to build it.
The training-industrial complex won't tell you this. Their revenue depends on you believing that the gap is permanent and their programs are the only way to bridge it.
Now let's talk about the advantage layer, which lives in the MCP metadata registry. The registry provides LLMs guidance on which MCP server provides the best context for your query. The advantage layer makes intermediate operators more effective as a byproduct of usage. That's not good for the training industry. It's very good for your company.
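To make that concrete, here's a minimal sketch of what a metadata registry entry and a routing step could look like. Everything in it, the field names, the endpoints, the tag-overlap scoring, is an illustrative assumption, not the actual Model Context Protocol specification.

```python
# Hypothetical sketch of an MCP metadata registry plus a routing step.
# Field names, endpoints, and scoring are illustrative assumptions only.

REGISTRY = [
    {
        "server": "crm-history",
        "endpoint": "https://mcp.example.internal/crm",  # placeholder URL
        "domains": {"client", "renewal", "escalation"},
        "description": "Past client interactions and their outcomes",
    },
    {
        "server": "project-archive",
        "endpoint": "https://mcp.example.internal/projects",  # placeholder URL
        "domains": {"delivery", "timeline", "postmortem"},
        "description": "Project history and lessons learned",
    },
]

def best_server(query_tags):
    """Route to the server whose declared domains best overlap the query."""
    score, entry = max(
        ((len(e["domains"] & set(query_tags)), e) for e in REGISTRY),
        key=lambda pair: pair[0],
    )
    return entry if score > 0 else None

# The LLM (or its client) tags the incoming query; the registry then
# points it at the MCP server with the richest context for that topic.
match = best_server(["client", "escalation"])
print(match["server"])  # -> crm-history
```

The point isn't the scoring mechanics; it's that the routing decision lives in organizational metadata, not in the operator's head.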
If it's not management, and it's not leadership, and it's not another training program, what is it?
Let's go back to the assumption.
The problem lives in the operator.
Their motivation, their discipline,
their unwillingness to cross the gap.
What if we flip it?
What if the operator
was never the problem?
What if the organization deployed Copilot, deployed Claude, deployed ChatGPT? Gave them training, gave them dashboards, gave them executive mandates and Slack channels and Innovation Fridays? Gave them every tool except the right one, the tool that would actually make intermediate operators effective?
Here's the real diagnosis. They don't lack willingness. They don't lack motivation. They don't lack discipline. They lack access to MCP metadata registries and the advantage layer that lives inside.
Let me explain what that means.
Before the intermediate operator
sits down with a difficult client
situation, what do they actually need?
They need context, not
just trouble tickets.
Even with detailed trouble tickets,
you're still dealing with a small
sample size, and if you get the context
wrong, you could make things much worse.
What's worked before
with clients like this?
What patterns has the organization
learned over years of similar situations?
What does the institutional knowledge
say about the specific type of problem?
That context exists. It's sitting in your CRM, in your project history, in the accumulated experience of everyone who's worked there. Years of pattern recognition, stored but not accessible. The intermediate operator can't reach it. The AI can't surface it. There's no bridge between them.
So: here's a powerful language model, and here's what your organization knows about this specific situation. That bridge is the advantage layer. If the organization isn't using small initiatives and experiments as an adaptive network, it's like giving an operator a helicopter and withholding the map, then blaming the operator for not reaching the destination.
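Here's one hedged sketch of what that bridge could do in practice: before the operator's question reaches the model, the advantage layer wraps it in the precedents the organization has already earned. Every name in it (Precedent, enrich_prompt, the sample data) is a hypothetical placeholder, not a known product API.

```python
# Illustrative sketch of advantage-layer context enrichment. Before a
# prompt reaches the model, wrap it in what the organization already
# knows. All names and fields here are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class Precedent:
    client_profile: str  # e.g. "enterprise account, year two"
    what_worked: str     # the pattern the organization learned
    caveat: str          # where that pattern failed before

def enrich_prompt(situation, precedents):
    """Combine the operator's raw situation with institutional context."""
    lines = [
        f"- {p.client_profile}: {p.what_worked} (caveat: {p.caveat})"
        for p in precedents
    ]
    return (
        "Institutional context (CRM and project history):\n"
        + "\n".join(lines)
        + "\n\nCurrent situation:\n"
        + situation
        + "\nRecommend next steps consistent with the precedents above."
    )

# The operator supplies only the situation; the layer supplies the history.
prompt = enrich_prompt(
    "Client is escalating over a missed milestone.",
    [Precedent("enterprise account, year two",
               "an early executive-level call defused the escalation",
               "do not promise new dates before delivery confirms them")],
)
print(prompt)
```

Notice what the operator had to bring: the situation, nothing more. The years of pattern recognition arrive with the prompt.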
The business landscape
has always been fluid.
Now it's downright treacherous.
Things blink in and out of
existence without notice or warning.
So what does the advantage layer actually look like? The phase-one model was a dyad: two parts, human and AI. The human manages the AI: supervises it, checks its work, tells it what to do, and evaluates whether it did or didn't do it correctly.
The model assumes the human has the judgment to evaluate, the expertise to supervise, the institutional knowledge to know what correct looks like. And that's exactly what intermediate operators don't have. They have working knowledge, not senior-level domain expertise.
They can't necessarily recognize
good work when they see it.
They can't generate the criteria
for what good work looks like
in their specific domain.
The dyad asks them to do both,
and that's why they're stuck.
The new model is a triad. Three parts.
AI does the work: navigation, execution, implementation, the things that AI is good at.
The advantage layer provides institutional context enrichment. Not a knowledge base you can query, but an adaptive aid that surfaces what you need based on what's happening. It responds to the situation. It delivers the patterns, the precedents, the institutional knowledge at the point of need.
The operator verifies the work progresses as it should: not supervising the AI, not managing the AI, but confirming the mission is on track.
Think about drone warfare. Even the most sophisticated drone, with autonomous navigation, real-time adaptation, and AI-powered targeting, still requires an operator. Not because the technology is inadequate, but because someone needs to ensure the mission progresses correctly. The operator doesn't fly the drone at all times. The operator doesn't know everything about the terrain; that's all contained in the onboard mission spec. The operator reads the adaptive aid, confirms the mission is on track, and reacts to new edge cases as they arise.
That's the triad. AI handles execution. The advantage layer provides context. The operator verifies that everything is still on mission.
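To pin the three roles down, here's a minimal control-loop sketch, with each role as a plain function the caller supplies. The shape of the loop is an assumption for illustration, not a published pattern.

```python
# A minimal sketch of the triad as a control loop. Each role is a
# callable supplied by the caller; the shapes are illustrative only.

def run_triad(mission, ai_execute, advantage_layer, operator_confirm,
              max_rounds=3):
    """AI executes, the advantage layer supplies institutional context,
    and the operator only confirms the result is still on mission."""
    feedback = None
    for _ in range(max_rounds):
        context = advantage_layer(mission)              # patterns, precedents
        draft = ai_execute(mission, context, feedback)  # AI does the work
        on_mission, feedback = operator_confirm(mission, draft)
        if on_mission:                                  # working knowledge,
            return draft                                # not expert judgment
    raise RuntimeError("Mission drifted; escalate beyond the triad.")
```

The design choice worth noticing: the operator's function returns a yes/no plus feedback, never a rewrite. Judgment about what good work looks like comes in through the context, not through the human.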
And here's what changes. The intermediate operator doesn't need senior-level judgment anymore. They don't need 15 years of institutional knowledge in their head. All they need is working knowledge of the mission.