Value Density Case for AI Planning
E2

One of the things that most people
would agree on is that AI changes

everything, even planning, but what
most people don't realize is just how

fundamental the changes really are.

Welcome to the Value Density podcast.

I'm your host, Zachary Alexander.

Planning used to be something that
took place on paper and resulted

in something that looked more
like a bill of materials, not

something that you could iterate on.

You'd have project managers spending weeks
creating Gantt charts, resource allocation

spreadsheets, and milestone documents
that were essentially dead on arrival.

What I mean by this is that
all this material produced
very little context.

It was mostly busy work, designed
to impress the higher-ups.

People could say, I must be working hard
because of all this paper I generated.

You can't interrogate the logic and
identify opportunities for improvement,

which you could think of as increasing
the completeness of the solution, or value

density. With all this paper, traditional
planning was like taking a photograph

of your intention and then pretending
that the photograph would guide you

through a constantly changing landscape.

It was static, rigid,
fundamentally disconnected from reality.

But here's what's changed for us:
basically everything. I think that today's

process is more like an experiment, a
sophisticated, intelligent experiment.

You start a conversation with
your favorite LLM, you know,

like what does success look like?

The thesis? You ask the LLM
to brainstorm it with you.

Here's my big idea.

Here's what I think it
will take to accomplish it.

Then you ask the LLM what it thinks.

My big idea was how do I
create a mobile app to go along

with my self-service portal?

Think Microsoft SharePoint with
MCP: access to your well-known

OAuth-protected resources, internal
digital assets, and trade secrets.

Experience has shown that successful
production-grade agentic AI systems

resemble self-service portals.

Now, here's where it gets interesting.

Instead of spending three months writing
requirement documents that nobody

reads, I spent the afternoon having a
back and forth conversation with Claude

about the technical architecture,
user experience, and business logic.

But this wasn't just faster.

Now, you may say, well, what about
all the interaction with the users?

You know, why didn't you
go out and interview them?

Well, the reality is their job is not
to build production-grade AI systems.

That's my job.

Their job is to function
as a subject matter expert.

Their job is to keep the business
running and to do the things which

ensure the success of the business.

My job is to create production-grade
AI systems, but this wasn't just

faster, it was fundamentally different.

Most people do better when they
can actually see something that

resembles what they're looking for.

You know, they're not good at
coming up with abstract ideas.

They need something tangible, something
they can click on, and that's what this

planning
process will deliver in an

afternoon's worth of conversations.

The AI wasn't just transcribing my ideas.

It was improving them, challenging
assumptions, and suggesting

approaches I hadn't considered.

For example, when I described my vision
for an Agentic AI gateway, Claude

immediately understood what I was shooting
for and understood my security concerns.

We discussed the challenges of OAuth 2.1

plus PKCE, pronounced "pixie."
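For context, PKCE (RFC 7636) works by having the client invent a one-time secret and hand the authorization server only a hash of it. Here's a minimal sketch in TypeScript using Node's built-in crypto module; the function names are my own illustration, not from any gateway codebase:

```typescript
import { createHash, randomBytes } from "node:crypto";

// PKCE (RFC 7636): the client generates a secret "verifier," sends only its
// hash (the "challenge") with the authorization request, then proves it
// knows the verifier when exchanging the authorization code for a token.

function makeCodeVerifier(): string {
  // base64url of 32 random bytes gives a 43-character verifier,
  // the minimum of the 43-128 character range the spec allows
  return randomBytes(32).toString("base64url");
}

// S256 method: challenge = base64url(sha256(verifier))
function makeCodeChallenge(verifier: string): string {
  return createHash("sha256").update(verifier).digest("base64url");
}

const verifier = makeCodeVerifier();
const challenge = makeCodeChallenge(verifier);
console.log({ verifier, challenge });
```

The authorization request carries the challenge plus `code_challenge_method=S256`; the token exchange carries the original verifier, so an intercepted authorization code is useless on its own.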

There was also the early mover risk.

MCP service security is a thing and the
community hasn't settled on a solution.

So I picked a standards-based solution,
the one that everyone else is talking

about. The agent-to-agent
kind of thing, I think,

is basically a thinly veiled land grab.

I think it's a way of getting
us to use more tokens.

There's no way that any organization
of any size is going to ditch

its bespoke security solution

for third-party, one-size-fits-all
convenience.

Let's make the LLM responsible for
securing all of our trade secrets

and create a single point of failure.

And if you crack that single
point of failure for one

company, then not only do you

get access to all the
companies in my industry,

you get access to all the companies
in everybody else's industries.

Everybody else is trying to get ahead.

We're gonna create the single point of
failure so bad actors can attack it.

Let me give you a little more
context about my agentic system:

it uses an MCP metadata
registry to improve tool selection.

You know what tends to happen
is that if you give an LLM

40 to 80 different tools from different MCP
servers, then it's gonna get this tool

selection process wrong 80% of the time.

That's just too many tools without
the necessary context, which is

provided by my MCP metadata registry.
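To make the registry idea concrete, here's a hypothetical sketch of how a metadata registry might narrow the toolset before anything reaches the model. The tool names, tags, and the overlap-count scoring are illustrative assumptions, not the actual registry design:

```typescript
// Hypothetical sketch: a metadata registry that narrows 40-80 MCP tools
// down to a handful before they ever enter the LLM's context window.

interface ToolMeta {
  name: string;
  server: string; // which MCP server exposes the tool
  description: string;
  tags: string[]; // curated metadata, e.g. "hr", "finance", "search"
}

function selectTools(registry: ToolMeta[], taskTags: string[], limit = 5): ToolMeta[] {
  return registry
    .map((tool) => ({
      tool,
      // naive relevance score: how many task tags the tool's metadata matches
      score: tool.tags.filter((t) => taskTags.includes(t)).length,
    }))
    .filter((entry) => entry.score > 0)
    .sort((a, b) => b.score - a.score)
    .slice(0, limit)
    .map((entry) => entry.tool);
}

const registry: ToolMeta[] = [
  { name: "payroll_lookup", server: "hr-mcp", description: "Look up payroll records", tags: ["hr", "payroll"] },
  { name: "invoice_search", server: "finance-mcp", description: "Search invoices", tags: ["finance", "search"] },
  { name: "org_chart", server: "hr-mcp", description: "Fetch the org chart", tags: ["hr"] },
];

// Only HR-relevant tools reach the model for an HR task.
console.log(selectTools(registry, ["hr"]).map((t) => t.name)); // ["payroll_lookup", "org_chart"]
```

The point is that the model only ever sees a few well-described tools, which is a far easier selection problem than 40 to 80 raw schemas.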

Now, let me explain why this mattered.

Because it addresses one of the biggest
pet peeves about how organizations are

being instructed to structure
their MCP service architecture.

One of my pet peeves is that all the
so-called gurus view organizations as flat.

And what that means is that there's
only one system per function.

HR has one system, finance
has another system.

Operations has a system, all of its own.

And everything lives in nice little boxes.

While this may be true of startups
and entrepreneurial organizations,

it's absolutely not true of middle
market companies, and middle market

companies represent the biggest
opportunity for AI businesses today.

The reason is that middle
market companies are too big

to use off the shelf solutions.

You know, they got some skin in the game.

And they're too small
to create their own AIs.

You know, they can't afford to go
out and spend $95 billion to create

their own LLM, you know, that's
out of their price range.

These companies have either been
bought or want to make it easy

to be acquired in the future.

They're dealing with legacy systems,
acquired technologies and competing

platforms that need to work together.

They likely have whole divisions that
aren't talking with other divisions.

Each has its own system and
is wildly protective of it.

The spares costs are through the
roof, but absolutely no one is

gonna spend the time and effort
necessary to slay that dragon.

For the record, you know, spares
cost is the cost of maintaining

spares for a given system.

Think of printers
that use different toners.

The cost of buying a different toner
is technically a spares cost.

MCP metadata registries are silo busters.

They abstract away competing interests,
be they technical or political. Instead of

forcing companies to rip out and replace
their existing systems, they create

intelligent bridges that understand the
context and capabilities of each platform.

In future episodes, we will discuss
in detail how MCP registries work and

my proposal for a federated MCP Registry
Framework, which I believe will become

the backbone of enterprise AI integration.

Back to the agentic AI gateway.

I believe that because of Claude
Desktop and other agentic AI

agents, AI applications need to
have a desktop and mobile version.

When I ask how best to accomplish
this, Claude suggested Flutter.

Now, here's the rub.

I'm using Next.js and TypeScript
for my web server version,

which is a self-service portal.

Flutter uses Dart, the
programming language.

The question is, how do I manage the
development of one system with code

bases in two different languages?

This is where traditional planning
would have bailed on me. A traditional

project manager would've seen it as a
requirement and marked it down as a TBD.

Multiple code bases mean multiple
teams, increased complexity, and higher

costs. But the AI planning process
sees this as a straightforward

opportunity to answer the question,
and the answer is a monorepo.

For those who aren't familiar,
monorepos are git repositories

that allow you to manage multiple
languages and share code where possible.
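As one possible shape, purely as an illustration of the idea rather than the actual repo, a pnpm workspace file can anchor the TypeScript side while the Flutter app lives in a sibling directory of the same git repository:

```yaml
# pnpm-workspace.yaml — hypothetical layout: one git repo, two languages
packages:
  - "apps/web"        # Next.js + TypeScript self-service portal
  - "packages/tokens" # shared design tokens both platforms read

# apps/mobile would hold the Flutter/Dart app; Dart's own tooling
# (pub, melos) manages it, while git versions everything together.
```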

But here's what's fascinating.

When I explained this challenge to Claude,
it didn't just suggest a monorepo.

It provided a specific tooling
recommendation and a token system

that would provide a consistent
presentation layer across platforms.

This is the difference between static
planning and AI enhanced planning.

Traditional planning would have
given me a list of requirements.

AI planning gave me a complete
architectural strategy with implementation

details, something that I could set up
and prototype in the same afternoon.

Next was a discussion about
what design system to use.

Now, design
systems are a way of standardizing

the look and feel of applications.

It's a way of maintaining
and managing something

like your branding and such.

I use shadcn, and until recently,
shadcn hadn't had an official Flutter port.

A design system is a fancy way of saying:

all of the elements and colors
that go into the presentation.

Choosing shadcn in the Next.js
world is an easy decision.

On the other hand, the official
shadcn Flutter port is a godsend.

There are literally thousands of UI
design systems, and none of them translate

from what's in Figma to what's
in the mobile space, and they don't

translate to Next.js, so this is awesome.

Being able to use the same design
system across multiple platforms will

strengthen the integrity of the project,
but it goes even further than that.

It goes to credibility.

Not every decision maker can
evaluate the difference between

one agentic AI app and another,

but they can see when a dev team
is paying attention to details and

producing professional looking results.

This is something that becomes
critical when you're dealing

with middle market companies.

These organizations often have
decision makers who aren't

technical, but are responsible for
approving technical investments.

Visual consistency and professional
presentation become trust signals

that can make or break a deal.

One of the key elements of maintaining
a consistent professional looking

presentation layer is establishing a
single source of truth across design and code.

And the best way to do this is to
stand up a Token Studio instance.

Token Studio provides a design-to-code
pipeline right out of the box so

you can design something in Figma
and it will flow into your code

base. This is enormously important
for branding and brand management.
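To illustrate the design-to-code idea, here's a hypothetical token tree, shaped roughly like the JSON such a pipeline might export from Figma, plus a small TypeScript helper that flattens it into CSS custom properties for the web side. The token names and values are made up for the example:

```typescript
// Hypothetical design-token tree, resembling the JSON a design-to-code
// pipeline could export from Figma into the repo.
type TokenTree = { [key: string]: string | number | TokenTree };

const tokens: TokenTree = {
  color: {
    brand: { primary: "#1a73e8", surface: "#ffffff" },
  },
  spacing: { sm: 8, md: 16, lg: 24 },
};

// Flatten the tree into CSS custom properties, one per leaf token.
function toCssVars(prefix: string, tree: TokenTree): string[] {
  return Object.entries(tree).flatMap(([key, value]) =>
    typeof value === "object"
      ? toCssVars(`${prefix}-${key}`, value)
      : [`${prefix}-${key}: ${value};`]
  );
}

console.log(toCssVars("--token", tokens));
// one line per leaf, e.g. "--token-color-brand-primary: #1a73e8;"
```

The Flutter side would read the same tree and emit Dart theme constants instead, which is what keeps the two presentation layers in sync.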

But here's where the AI planning
revolution really hit me.

When I was setting up Token Studio,
I realized that I wasn't just

building a design system, I was
building a scalable business model.

Until now, we really haven't
talked about productized services.

But since we're going deeper and
the product is becoming denser, we

have to talk about how we spread
the development across projects.

Think about it this way: every decision
I was making, the monorepo structure,

the design system choice, the Token
Studio implementation, was creating

reusable assets that could be deployed
across multiple client projects.

One of the ways we do this
is with branding, potentially

white label branding.

So now we're talking about
something that could be of

interest to marketing agencies.

This is what planning looks like in
the AI generation. Marketing agencies

are sitting on a gold mine right now.

Most of them don't even realize it.

Their clients are already experimenting
with AI tools: ChatGPT for content,

Claude for analysis, and various automation
platforms, but they're doing it in silos.

Without integration, without professional
implementation. Imagine walking

into a client meeting with a white
labeled AI gateway that integrates

with their existing CRM, maintains
their brand guidelines, includes their

data sources, and can be deployed
in two weeks instead of six months.

This isn't theoretical.

It's the future.