ChatKit: Don't Overreact
E5

Did OpenAI just make your
competitive advantage obsolete?

Again?

Recently they launched ChatKit,
a developer toolkit that turns

what used to be a six-week custom
AI chat implementation into a

six-hour plug-and-play solution,
complete with testing, right on cue.

AI agencies are panicking about what
this means for their business models.

Here's the pattern.

You've probably seen it a thousand times.

Major AI company releases new capability.

AI agencies scramble to
figure out the implications.

Everyone debates whether this is
good or bad for their business.

Seven months later, the cycle repeats.

This time the screams are much louder
because the capabilities have doubled.

Stop me if you've heard this before.

The real problem isn't ChatKit.

The problem is agencies are
playing a game they cannot win.

The math is the math, and the
problems are growing exponentially.

Plainly put, AI capabilities
double every seven months.

Traditional competitive advantages
decay within 18 to 24 months.

Depending on how far you are into the
current AI capability cycle, your ultra-unique

implementation may become table stakes
before your next invoice gets paid.

While everyone's debating ChatKit's
impact on their business model, they're

missing the fundamental question: how do
you build competitive positions that keep

pace with the rate of AI improvement?

Just so we're on the same page.

Positioning is a unique
space in the customer's mind

that your company occupies.

It's how your organization is
perceived, your distinct identity,

reputation, and the specific job your
company is routinely hired to do.

This matters because when you
change your offer in response

to a new AI capability, your
position can be negatively impacted.

When we talk about enduring advantage,
we're talking about competitive

positioning that compounds over time
through value density strategies and

intelligent feedback loops, creating
barriers that become mathematically

insurmountable as the advantage
accelerates due to the introduction

of new or deepening AI capabilities.

We're not talking about temporary
wins, not this quarter's edge.

By shooting for enduring advantage, we're
talking about positioning that strengthens

automatically while competitors chase
the next shiny object powered by

some new capability, or clutch their
pearls wondering how they're going

to adjust to today's new reality.

Let's backtrack a bit and say
that, contrary to popular belief,

in a time of AI abundance, speed
isn't just about response time.

It also includes the complexity of
the challenges that can be addressed.

Bear with me. Normally, speed is measured
by how quickly you get an answer from

the system, and this still matters.

However, as AI capabilities
increase, speed is more about how

fast complex projects complete.

We're quickly moving beyond
simple AI chatbots.

Anthropic's Claude LLM just released a
major new capability called Skills.

You can think of Skills as a set of
custom instructions that allow you to

teach Claude how to handle specialized tasks.
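As a rough illustration (the folder-plus-SKILL.md shape follows Anthropic's published Skills format, but treat the specifics below as assumptions and check their docs), a skill is a small instruction file Claude loads when a task matches its description:

```markdown
---
name: release-notes-writer
description: Drafts release notes from merged PR titles in our house style.
---

# Release Notes Writer

When asked for release notes:
1. Group the PR titles by area (features, fixes, docs).
2. Write one short bullet per change, in present tense.
3. Add an "Upgrade notes" section only if a change is breaking.
```

The skill name and steps are hypothetical; the point is that capability gets encoded as reusable, inspectable instructions rather than one-off prompts.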

Now imagine that capability in
seven months when it doubles.

Okay, back to speed.

Speed also measures how much time
an LLM or foundation model can work

unsupervised on a complex problem.

So people can take weeks to pull
together a report even if they know

where the parts and pieces are located.

AI may be able to do it in an
afternoon and start a second

one without any rest or delay.

It always bothers me when people use
terms that they don't explain later.

So let's discuss foundation models.

Foundation models are broad AI models
designed for versatility; LLMs are

a specific type of foundation
model focused on language.

Foundation models can be trained on text,
images, audio, and video, making them

useful for a variety of downstream tasks.

LLMs, on the other hand, are
trained exclusively on large text

databases and excel at understanding
and generating human-like text.

Now let's do an OpenAI reality check.

Let's be clear about what ChatKit
actually represents and why the reflexive

panic misses the point entirely.

What does ChatKit do?

ChatKit handles the backend
infrastructure for AI chat experiences:

authentication, message streaming,
conversation management, rate limiting,

and all the technical plumbing that
used to require custom development.
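To make "technical plumbing" concrete, here's a minimal sketch of just one of those pieces, a token-bucket rate limiter of the kind chat backends used to hand-roll. This is an illustrative stand-in, not ChatKit's actual implementation:

```python
# Illustrative stand-in, not ChatKit's code: a token-bucket rate limiter,
# one of the plumbing components chat backends used to hand-roll.
import time

class TokenBucket:
    """Allows short bursts up to `capacity`, refilling `rate_per_sec` tokens/sec."""

    def __init__(self, rate_per_sec: float, capacity: int):
        self.rate = rate_per_sec
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, never past capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate_per_sec=5, capacity=3)
print([bucket.allow() for _ in range(4)])  # the burst passes, then the bucket is empty
```

Multiply this by authentication, streaming, and conversation state, and you get the six weeks of custom work that a toolkit now collapses into configuration.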

What makes it significant?

It's not revolutionary technology.

What it does is commoditize your
five-figure custom chat implementation,

turning what used to take months
into a configuration exercise.

I understand the panic, because AI
just blew a five-figure hole in

your revenue stream per project.

But here's what everyone's missing.

ChatKit isn't the threat.

ChatKit is simply a symptom of
the actual threat: the systematic

commoditization of every AI
capability that can be standardized.

News alert.

This commoditization is accelerating.

So let's do a back of
the envelope calculation.

Everything in the AI ecosystem
is doubling in capability every

seven months, not just the LLM or
the foundation models, everything.

It's the tools and the databases as well,
especially tools like vector databases.

So it's easy to get out of sync
and miss a capability window.

This means that your 18-month
advantage window could shrink to

approximately three to four months
before competitors can replicate your

work using better tools and doing it cheaper.

This is not a business model.

It's a treadmill.
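The treadmill math can be sketched directly. Assuming capability doubles every seven months and the effort to replicate an advantage halves with each doubling (a simplifying assumption), an 18-month head start collapses fast:

```python
# Back-of-envelope advantage decay.
# Assumption: replication effort halves with every capability doubling.
DOUBLING_MONTHS = 7

def replication_time(original_months: float, doublings: float) -> float:
    """Months a competitor needs to replicate an advantage that originally
    took `original_months` to build, after `doublings` capability doublings."""
    return original_months / (2 ** doublings)

for doublings in (0, 1, 2, 2.2):
    months_elapsed = doublings * DOUBLING_MONTHS
    print(f"after {months_elapsed:4.1f} months: "
          f"{replication_time(18, doublings):4.1f} months to replicate")
```

At roughly two doublings in, about 15 months, the 18-month build is down to a three-to-four-month replication job, which is exactly the shrinking window described above.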

Now let's talk about what I like to
call the chicken little cycle, which

is really gripping the AI agency world.

For the record, this
panic cycle isn't new.

It's predictable and it's becoming boring.

The pattern: capability launch.
OpenAI, Anthropic, or Google

releases some new functionality.
Agencies panic.

They say, what does this
mean for our business?

There's market confusion.

Clients ask whether they need to switch
strategies or hire a new vendor.

Rushed implementations: agencies
scramble to add the new capability or

launch new strategies. Temporary relief:
a brief period of competitive positioning.

Oh, they love it, but
it's not even halftime.

Next launch: the entire cycle
repeats seven months later.

Why does this cycle exist?

Most agencies have allowed the
response to AI capability advancement

to creep onto their critical path.

In this case, what I mean by
critical path is the steps that are

important to the agency's survival.

If they are delayed, it could sink the
organization, or at the very least,

have a lasting impact on its viability.

When you allow AI capability advancement
to creep onto your critical path,

you're guaranteeing constant disruption
and detours because of what I like

to call the capabilities avalanche.

The capabilities avalanche is the idea
that the pressure from new capabilities

will continue to grow until it
spills over like an avalanche, burying

everything else on your company's agenda.

Here's the strategic error.

Treating each new AI release as
a strategic decision point rather

than an expected variable that's
already priced into your planning.

It's like a logistics company
treating traffic patterns as unexpected

disruptions instead of predictable
variables you can optimize around.

Don't get me wrong, you also can't
ignore the advances in capability.

Ignoring them creates tech debt, which
brings its own set of negative circumstances.

For the record, tech debt is a
phenomenon in which minor changes

cost exorbitant amounts to fix
because the organization

hasn't made the incremental
changes needed to keep costs down.

I mean, there's always a certain
amount of maintenance that needs

to be applied to these systems.

If you don't do it, you end up
with tech debt, so that even a

minor change gets blown way out of
proportion from a cost standpoint.

So instead of changing a widget, you end
up having to basically bring in whole new

systems because you haven't made those
incremental changes necessary to keep up.

Tech debt can also result from
the capabilities avalanche.

Companies should at least incorporate
some of the capability upgrades that

come by way of the capabilities
avalanche, so that they don't end up

having to devote significant
funds to get back on track.

Plus, you lose the window for
experimentation, and I think this is

the thing most people miss, and it's
really important: losing the window

of experimentation also increases your cost.

By the time companies in this situation
get around to investing the necessary

funds, they can't afford the time
needed to set up a sandbox

and let the solution burn in.

How do you end the cycle?

Take AI capability advancement
off your critical path. Stop being

surprised by the foreseeable. Build
positioning that assumes continuous

exponential improvement in AI capabilities.

Strategically, this requires a fundamental
shift from competitive advantages based

on static capabilities to enduring
advantage based on value density.

Value density means adopting a philosophy
that packs as much impact as possible

into every single AI prompt.

You can also think of it as good
things come in small prompts.

Here's an aside for the tech-minded
folks among us, otherwise

labeled the shadcn insight.

While agencies worry about ChatKit
commoditizing backend chat

infrastructure, they're overlooking
the more strategically significant

development: shadcn AI Elements.

What shadcn AI Elements provides is
consistent, production-ready UI components

specifically designed for AI interaction:
prebuilt interfaces that

integrate seamlessly with
existing shadcn design systems.

Why does this matter?

Your parents probably told you
not to judge a book by its cover.

They also told you always
make a good first impression.

In AI implementations, the
UI is the first impression.

ChatKit handles the backend technical
plumbing: important, but invisible to users.

shadcn AI Elements handles
the user experience:

the actual interface through which
your clients' customers interact

with the AI capabilities.

This is where you make your money.

You can have terrible backend
performance in the beginning.

As long as it looks professional,
people will give you a pass.

They just assume, well, you know,
you're just working out the bugs.

If it looks bad, everybody
can weigh in on it.

Everybody feels that they have
an opinion on how things look.

They feel a lot more comfortable being
able to comment on the way it looks.

Nobody comments on the backend stuff.

They just say, well, you know, maybe
I just don't know what's going on.

Maybe this is, this is
how it's supposed to be.

Maybe the technology just hasn't caught
up, but they know when it looks bad.

The strategic distinction: ChatKit
commoditizes technical

implementation (table stakes).

shadcn AI Elements enables a consistent,
professional user experience

(a differentiation opportunity).

But here's the deeper insight.

Neither ChatKit nor shadcn
AI Elements is the answer to

building enduring advantage.

They're both tools that will be
commoditized within 12 to 18 months.

The real opportunity isn't
in the tools themselves.

It's in understanding how to
create enduring advantage using

these tools in ways that compound
over time rather than decay.

That's the distinction between
playing with the tools and
building enduring advantage.

The Enduring Advantage framework.

So what does it actually mean to
fixate on enduring advantage?

And more importantly, how do we
measure whether you're building it?

The enduring advantage definition.

Let's start there.

It is competitive positioning that
compounds over time through value density

strategies and intelligent feedback
loops, creating barriers to imitation

that become mathematically insurmountable
as the advantage accelerates.

Let's talk about the
three critical components.

Number one, value density
methodologies. This isn't about doing

more with AI; it's about creating
exponentially more value, where every

implementation generates insights,
capabilities, and strategic options

that exceed the initial investment.

Example: an AI chat implementation
that doesn't just handle customer

service and answer questions,
but also generates product insights,

competitive intelligence, market trend
analysis, and customer satisfaction

predictions to inform strategy
across multiple departments,

without a lot of hassle.

It just basically happens.

It's a configuration issue.

Not a "let's set up a committee to
figure out what needs to happen" exercise.

You know, basically you turn it on and all
this stuff just flows through and outputs

to your screen. The output needs to be
exponentially greater than the input, not

just in efficiency, but in completeness.

Okay?

So number two, intelligent feedback loop
systems that improve automatically through

use, becoming more valuable over time
without linear increases in management

effort, so you don't have to do anything.

An example: a customer service AI
learns from every interaction,

automatically improving response quality,
identifying new service opportunities,

and optimizing for outcomes that
weren't even explicitly programmed.

The advantage isn't in the
initial implementation.

It's in the accumulated learning
that competitors can't replicate

even with identical technology.
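A toy sketch of that accumulated-learning idea (all names here are illustrative, not any particular product): each interaction updates resolution statistics, so answer selection improves with use, and a competitor starting from zero with identical code has none of the history:

```python
# Toy sketch of an intelligent feedback loop: interaction outcomes accumulate,
# and answer selection improves without per-interaction management effort.
# All names are illustrative.
from collections import defaultdict

class FeedbackLoop:
    def __init__(self):
        # (topic, answer) -> [times it resolved the issue, times it was tried]
        self.stats = defaultdict(lambda: [0, 0])

    def record(self, topic: str, answer: str, resolved: bool) -> None:
        outcome = self.stats[(topic, answer)]
        outcome[1] += 1
        if resolved:
            outcome[0] += 1

    def best_answer(self, topic: str, candidates: list[str]) -> str:
        # Prefer the candidate with the highest observed resolution rate.
        def rate(answer: str) -> float:
            ok, tried = self.stats[(topic, answer)]
            return ok / tried if tried else 0.0
        return max(candidates, key=rate)

loop = FeedbackLoop()
loop.record("billing", "portal_link", resolved=True)
loop.record("billing", "portal_link", resolved=True)
loop.record("billing", "escalate", resolved=False)
print(loop.best_answer("billing", ["portal_link", "escalate"]))  # -> portal_link
```

The code is trivial to copy; the accumulated outcome history inside it is not. That history is the moat.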

You know, back in the day, we used
to call this tacit knowledge, and

it's why we need people in the loop
for the foreseeable future because

not all knowledge is written down.

There's stuff that people just
have a feeling about that they just

know because it's muscle memory.

That's tacit knowledge.

Number three, barriers to imitation.

The world has changed and proprietary
technology no longer holds its value.

The game will be won or lost based
on relevant accumulated context.

It will be won based on the
ability to get the right chunk

of information when it's needed.

Without getting too deep, there
is a term called context rot.

This happens when AI prompts
include too much context.

Just as a heads up, the term is already
being used in strategic discussions about

AI systems design and prompt engineering.

So context rot occurs when
excessive contextual information

is included in prompts to AI models
causing several negative outcomes.

Number one, degraded prompt quality.
When excessive or irrelevant

context is included, large language
models often struggle to figure out

what matters, or even what you're looking
for; they get confused and

produce unreliable or erratic outputs.

Even simple tasks can fail if
too many distractors are present.

This confusion is frequently
cited in technical write-ups about

context rot and prompt engineering.

Number two, reduced system performance.

Larger, more complex prompts force
the AI to process more information.

Now, what they don't tell you is
that AI systems have memory, and

they have memory windows. If
you fill up the memory window with

too much context, things fail.

You don't get the information that's
really needed because irrelevant stuff

or conflicting stuff could end up in
the context window before the stuff we

really need in order to complete the task.

The model may need to scan and analyze all
contexts to find relevant bits, slowing

down outputs and raising resource costs.

Number three, strategic disadvantage.
Outdated, irrelevant, and bloated context

makes systems harder to maintain and optimize.

As the context window fills up,
attention spreads thin, model reasoning

drops in quality, and updating
core logic becomes more difficult.

This leads to failing features,
rising operational costs, and

the loss of strategic clarity.

It's why we talk about good
things coming in small prompts.
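The "small prompts" discipline can be sketched as a token budget: rank candidate context chunks by relevance and stop adding once the budget is spent, instead of stuffing everything in. The token estimate and relevance score below are deliberately crude stand-ins (real systems would use a tokenizer and embeddings):

```python
# Sketch of budgeted context packing ("good things come in small prompts").
# approx_tokens and relevance are crude stand-ins for a tokenizer / embeddings.

def approx_tokens(text: str) -> int:
    return max(1, len(text) // 4)  # rough heuristic: ~4 characters per token

def relevance(chunk: str, query: str) -> int:
    # Toy score: count of query words that appear in the chunk.
    return len(set(query.lower().split()) & set(chunk.lower().split()))

def pack_context(chunks: list[str], query: str, budget_tokens: int) -> list[str]:
    """Keep the most relevant chunks that fit the token budget; drop the rest."""
    picked, used = [], 0
    for chunk in sorted(chunks, key=lambda c: relevance(c, query), reverse=True):
        cost = approx_tokens(chunk)
        if used + cost <= budget_tokens:
            picked.append(chunk)
            used += cost
    return picked

chunks = [
    "refund policy covers refunds within 30 days of purchase",
    "company history and founding story " * 20,   # long and irrelevant
    "shipping times vary by region",
]
print(pack_context(chunks, "what is the refund policy", budget_tokens=30))
```

The long, irrelevant chunk never makes it into the prompt, so the model's attention and the context window are spent on the material that actually answers the question.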

Most strategic thinking breaks down here.

Without the back-of-the-envelope
calculation, enduring advantage

becomes an abstract aspiration
rather than a measurable reality.

So, the traditional AI agency model:
Sell AI implementation services.

Build custom solution using
the latest capabilities.

Clients get efficiency improvements.

The advantage decays as
capabilities commoditize.

Repeat sales cycle every seven months.

The value density agency model:
design implementations that

create compound intelligence.

Build feedback loops that
improve automatically.

Clients gain strategic positioning
that strengthens over time.

Network effects multiply
value across client base.

And enduring advantage increases with
each AI capability doubling cycle.

So here's my reality
check for the coming year.

We will see many more
advances beyond chat kit.

And yes, there will be more shadcn-style
tools that turn current technology

advantages into configuration exercises.

The agencies that survive and
thrive won't be the ones with the

best implementation capabilities.

Those capabilities will be
commoditized faster than you

can build a market for them.

The agencies that dominate will be the
ones that build enduring advantage.

Design competitive positioning
that strengthens over time.

Implement intelligent feedback loops that
compound into competitive advantages.

Accelerate learning velocity that
outpaces the capability avalanche.

Establish a strategic network that
multiplies advantage across clients.

The question isn't whether a value
density agency model will be successful.

The exponential mathematics
and middle market demand will

validate that automatically.

The question is who will build it
first and become the category leader?

Will it be your agency, or are
you still panicking about the next

capability release while someone else
is building enduring advantages that

become mathematically insurmountable?

The choice is yours, but the window is
closing faster than most agencies realize.