
Designing MAIA: The Spine of Intent in Complex Systems

MAIA didn’t come out of a clean design exercise.

It didn’t fall out of a workshop where everyone agreed on a neat diagram, wrote a spec, and walked away feeling accomplished. It came out of something much messier: the moment I realized everything I was building on top of LLMs was moving — but nothing had a spine.

The more I wired models to tools, prompts to chains, chains to more chains, the louder one question got in the back of my head:

What is this thing actually trying to do?

Not, “what’s the next step?”
Not, “what’s the current task?”
But, “what is the actual purpose underneath all of this?”

That question is where MAIA came from.

Not as a product.
Not as a brand.
As a necessity.


When “Doing Stuff” Stopped Being Enough

In the early prototypes — before the math, before RFS had a name, before AIVA was even a phrase in my vocabulary — I was doing what everyone else was doing:

  • wire the LLM to a tool,
  • feed the tool output into another prompt,
  • call another model,
  • throw in some reflection,
  • add branching,
  • call that “orchestration.”

On the surface, it looked like progress. I could demo it. Things happened. Tasks got “handled.”

Underneath, it was chaos.

The patterns were obvious once I stopped pretending they weren’t:

  • the system would solve the wrong problem because it latched onto the wrong detail,
  • it would wander off into side quests instead of finishing the real task,
  • it would hallucinate a goal that sounded smart but wasn’t mine,
  • it would jump context with no sense of continuity,
  • and whenever reality deviated from the happy path, the whole thing lost its grip.

I wasn’t orchestrating anything.
I was begging a probability engine to behave like a decision-maker.

I could tune prompts.
I could layer on “agent patterns.”
I could build more elaborate chains.

None of that touched the root problem:

There was no stable representation of what the system was actually trying to do.

There was motion, but no intent.


Why Intent Is Not a Prompt Problem

The common answer you hear is:
“Just describe the goal better in the prompt.”

That’s the software equivalent of telling someone in a burning building to “try harder to stay calm.”

It sounds like help.
It isn’t.

Prompt-level “intent” is tissue paper. It dissolves the moment:

  • the user changes what they care about,
  • a tool returns something unexpected,
  • internal state shifts,
  • the context window scrolls and memory vanishes,
  • or the model decides something else looks more “interesting.”

Real intent is not a line of text.

Real intent is a stateful object.

It needs to (there’s a concrete sketch after this list):

  • carry direction,
  • encode constraints,
  • remember relevant context,
  • know what is out-of-bounds,
  • understand priority and hierarchy,
  • and persist across calls, tools, and time.
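That list is abstract, so here’s a minimal sketch of what I mean by a stateful intent object, in Python. Every name here is an illustrative assumption; this is not MAIA’s actual schema, just the shape of the idea:

```python
from dataclasses import dataclass, field

@dataclass
class Intent:
    """Intent as a stateful object, not a prompt string.

    Field names are illustrative assumptions, not MAIA's real schema.
    """
    direction: str                                           # what we are ultimately trying to achieve
    constraints: list[str] = field(default_factory=list)     # hard limits: safety, business, technical
    context: dict = field(default_factory=dict)              # the relevant facts, not the whole history
    out_of_bounds: list[str] = field(default_factory=list)   # actions that are never allowed
    priority: int = 0                                        # where this sits in the goal hierarchy

    def allows(self, action: str) -> bool:
        """An action is in-bounds only if no boundary forbids it."""
        return action not in self.out_of_bounds
```

The specific fields matter less than the fact that this is state the system can consult and enforce on every call, instead of a string it hopes the model re-reads.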

You can’t “re-prompt” that into existence on every call and pretend you have architecture. That isn’t systems design. That’s cosplay.

Eventually I had to admit:

If intent doesn’t have a home in the system, everything else will always feel like chaos pretending to be structure.

That admission is where MAIA started.


What MAIA Actually Is (In My Words)

MAIA is the part of the system that knows why we’re doing anything at all.

Not in marketing language. Not in vague “mission” terms. In code and state.

Practically, MAIA holds:

  • What we’re doing right now (current task / plan),
  • Why we’re doing it (underlying intent),
  • How far we’re allowed to go (scope and depth),
  • What constraints we’re under (safety, business, technical),
  • What memory is relevant here (not every fact, the right ones),
  • What success looks like in this context,
  • What is off-limits, no matter how “clever” the model gets.

And the key: MAIA persists.

The system doesn’t wake up on every call and reconstruct its “goal” from scratch. It doesn’t forget what it was trying to achieve the moment the token window moves. MAIA holds the through-line while everything else — tools, models, sub-tasks — spins underneath it.
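One hedged way to picture that persistence: the intent state is rehydrated at the start of every call and written back at the end, so no call begins from scratch. The file path, field names, and storage format below are all assumptions for illustration; in the real stack this would presumably live in RFS, not a JSON file:

```python
import json
from pathlib import Path

STATE_PATH = Path("maia_state.json")  # hypothetical store; the real stack would presumably use RFS

def load_maia_state() -> dict:
    """Rehydrate the intent state instead of reconstructing the goal from scratch."""
    if STATE_PATH.exists():
        return json.loads(STATE_PATH.read_text())
    # First run: the intent is established once, then carried forward.
    return {
        "current_task": None,        # what we're doing right now
        "intent": None,              # why we're doing it
        "scope": {"max_depth": 3},   # how far we're allowed to go
        "constraints": [],           # safety, business, technical limits
        "relevant_memory": [],       # the right facts, not every fact
        "success_criteria": None,    # what success looks like in this context
        "off_limits": [],            # never allowed, however clever the model gets
    }

def save_maia_state(state: dict) -> None:
    STATE_PATH.write_text(json.dumps(state, indent=2))

def handle_call(user_input: str) -> None:
    state = load_maia_state()             # the through-line survives the token window
    state["current_task"] = user_input    # tools, models, sub-tasks spin under this
    # ... run models and tools under this state, never against it ...
    save_maia_state(state)                # the spine persists past this call
```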

When I say “MAIA is the spine of intent,” I’m not being poetic.

Without that spine, the system can move.
It just can’t stand.


The Human Parallel: Two Teenagers, One Intent

The clearest mental model for MAIA didn’t come from the lab. It came from my house.

Two teenagers.
Two very different internal architectures.

One is more external, direct, logic-forward.
The other is more internal, emotional, intuitive.

If I approach both of them with the exact same interface — same tone, same intensity, same timing — I can almost guarantee friction with at least one of them.

What I had to learn (often painfully) was this:

  • My intent has to be stable.
  • My interface has to be adaptive.

The intent is consistent:

  • keep them safe,
  • help them grow,
  • teach them to think,
  • stabilize the environment around them,
  • give them a foundation that’s stronger than whatever chaos they’re going to face.

But the expression of that intent changes based on who I’m with and what state they’re in.

Same spine.
Different surfaces.

That’s MAIA.

The system shouldn’t change what it wants just because one model is more “creative” and another more “literal,” or because one tool is brittle and another robust. The intent object stays stable. What changes is how that intent expresses (there’s a sketch in code after this list):

  • which tools are allowed,
  • which paths are in-bounds,
  • how aggressive or conservative we are,
  • how we interpret ambiguous input.
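Here is what that could look like, hedged: one stable intent, two different components, two derived policies. Everything here (the express function, the component fields) is invented for this sketch:

```python
# One stable intent; only the expression adapts per component.
# Every name here is hypothetical, made up for this sketch.

def express(intent: dict, component: dict) -> dict:
    """Derive a per-component execution policy from one stable intent."""
    return {
        "allowed_tools": [t for t in component["tools"]
                          if t not in intent["off_limits"]],           # which tools are allowed
        "risk": "conservative" if component["brittle"] else "normal",  # how aggressive we are
        "interpretation": component["style"],                          # how we read ambiguous input
    }

stable_intent = {"goal": "summarize the report", "off_limits": ["delete_data"]}

# Two very different components, one coherent intent:
creative_model = {"tools": ["search", "draft", "delete_data"], "brittle": False, "style": "open"}
literal_model  = {"tools": ["search", "draft"], "brittle": True, "style": "literal"}

print(express(stable_intent, creative_model))  # same spine, looser surface
print(express(stable_intent, literal_model))   # same spine, tighter surface
```

The intent dict is never copied per component or rewritten per call; each component only ever sees a policy derived from it.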

Two very different children.
One coherent intent.

Multiple heterogeneous components.
One coherent MAIA.

Once that clicked in my life as a father, it became obvious what MAIA needed to be in the architecture.


Where MAIA Lives: RFS, AIVA, TAI

MAIA doesn’t sit in a vacuum. It lives inside a stack:

  • RFS (Resonant Field Storage)
    Gives us a memory substrate that doesn’t evaporate. MAIA can attach to real continuity — not just token history.

  • AIVA Execution Fabric
    Provides deterministic execution. MAIA isn’t just a nice object sitting on the side; its intent constrains what the fabric is allowed to do (there’s a sketch after this list).

  • TAI / CAIO
    Act as the cognitive OS. They orchestrate perception, memory, tools, and models — but always under MAIA’s guidance about what we’re actually trying to accomplish and where the edges are.
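That constraint relationship is the part worth sketching. Reusing the illustrative state from earlier (none of this is the actual AIVA API), it could look like this:

```python
class IntentViolation(Exception):
    """Raised when execution would cross one of MAIA's boundaries."""

def execute(action: str, maia_state: dict) -> str:
    """Hypothetical fabric entry point: every action is checked against intent first."""
    if action in maia_state["off_limits"]:
        # The fabric is structurally unable to cross the spine's boundaries.
        raise IntentViolation(f"{action!r} is outside the current intent")
    return f"ran {action!r} under intent: {maia_state['intent']}"

maia_state = {"intent": "summarize the report", "off_limits": ["send_email"]}
print(execute("read_file", maia_state))    # in-bounds: the fabric proceeds
# execute("send_email", maia_state)        # would raise IntentViolation, by design
```

The point is that the check lives in the execution path itself, not in a prompt the model is free to ignore.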

We’re still building all of this. It’s not finished. It’s not polished. It’s not a SaaS landing page.

But the shape is clear:

  • Without MAIA, the system drifts.
  • With MAIA, the system has a center of gravity.


Why I Won’t Build Without Intent Anymore

I’ve seen what happens when systems — technical, organizational, human — run without clear intent.

  • Teams work insanely hard and go nowhere.
  • Products ship features that look impressive and solve nothing.
  • Architectures accumulate random helpers, patches, and hacks until nobody can explain why anything exists.
  • AI systems “do stuff” but never reliably do the right thing for the right reason.
  • Kids feel managed instead of understood.

I’m done building like that.

If I’m going to invest real time, energy, and reputation into a system, it has to:

  • know what it’s doing,
  • know why it’s doing it,
  • know where the boundaries are,
  • and be structurally incapable of drifting too far from that spine without someone noticing.

That’s what MAIA is for.

It doesn’t make everything perfect.
It doesn’t prevent every failure.

But it removes an entire class of failure:
systems that move without knowing what they’re moving toward.

And at this point in my life, I’m not interested in systems without a spine.


Key Takeaways

  • MAIA wasn’t born from theory; it was forced into existence by the chaos of building on LLMs without real intent.
  • Intent is not a sentence in a prompt — it’s a persistent, structured object in the system.
  • MAIA is the spine of intent: it holds purpose steady while execution adapts.
  • The same pattern showed up in my house before it showed up in my math: stable intent, adaptive interfaces.
  • Any serious system needs a real home for intent. For mine, that home is MAIA.
