
The Engineering Lie: "We'll Fix It Later"

If you’ve been in enough engineering rooms, you’ve heard the line that should make your stomach drop:

“We’ll fix it later.”

It sounds harmless.
Reasonable, even.
There’s pressure. There’s a deadline. There’s a customer demo. Of course we can’t solve everything now. Of course some things need to wait.

But here’s the thing most people won’t say out loud:

“We’ll fix it later” is almost never a scheduling decision. It’s a values decision.

I didn’t learn that from Twitter hot takes.
I learned it the hard way — selling into systems that were already breaking, watching teams ship architectures they knew were fragile, and then later, when I started building my own stack, feeling the exact same temptation crop up in my own head.

And that’s not me being dramatic — that’s what I kept running into at NTT, at AWS, inside t‑emgee, and then again when I started gluing early AI pieces together: people using “later” as a way to avoid telling the truth about what they were signing up for.

The older I get, the more intolerant I am of that lie — in code, in systems, and in my own life.


The Deal Where “Later” Came Due

One particular deal sticks with me.

Big enterprise.
Real money.
Lots of eyes on it.

We had an architecture on the table that everyone in the room knew was shaky:

  • fragile integrations,
  • a “temporary” data pipeline with no real governance,
  • an orchestrator that behaved as long as nothing unexpected happened,
  • no formal guarantees around behavior — just “it works in the lab.”

I pushed back.
Not because I wanted to be the heavy, but because I could see the failure modes:

  • backpressure that would ripple across the system,
  • inconsistent state between services,
  • support teams catching fallout they didn’t cause.

The room acknowledged the risk.
Everyone agreed the architecture “wasn’t ideal.”

And then the line landed:

“We’ll fix it later. Right now we just need to get this over the line.”

On paper, it made sense:

  • the quarter needed the revenue,
  • the customer wanted to move,
  • the team was tired,
  • the refactor looked big.

So we shipped.

You already know what happened.

Under real load:

  • the “temporary” components became permanent fixtures,
  • data quality problems quietly undermined trust,
  • the orchestrator hit weird edge cases nobody had time to reason about,
  • support teams were firefighting a system that had never been structurally sound.

We wrote postmortems.
We added monitoring.
We patched.

But you don’t patch away the fact that the system was born on a lie.

On the flight home from one of the incident reviews, I wrote:

“Every time I nod at ‘we’ll fix it later,’ I’m signing my name under a future apology I already know I’ll have to make.”

I haven’t been able to unsee that since.


Why “Later” Almost Never Shows Up

Let’s be honest: later almost never comes.

Not in enterprises.
Not in startups.
Not in our personal lives.

Here’s why:

  • Success cements the lie.
    If the fragile thing behaves just well enough to show progress, it becomes part of the critical path. Refactoring it gets riskier, not easier.

  • Incentives reward momentum, not integrity.
    People get promoted for shipping, not for quietly fixing the foundations before anything breaks.

  • New emergencies always show up.
    There’s always another feature, another deal, another fire. The debt you agreed to “fix later” keeps getting refinanced into the next quarter.

By the time you finally have the air cover to address it, you’re rewriting something under pressure with users and revenue sitting on top of it.

In AI systems, the stakes are even stranger:

  • models shift,
  • prompts mutate,
  • data drifts,
  • behavior changes without a clear audit trail.

“We’ll tighten it up later” in that environment turns into:

  • untraceable decisions,
  • unprovable behavior,
  • explanations that sound like stories instead of guarantees.

That’s why AIDF and the Mathematical Autopsy process exist in my world:

  • they force you to codify guarantees up front,
  • they make “later” explicit — as math, not wishful thinking,
  • they gate what you’re allowed to ship until the architecture can defend itself.
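The gating idea above can be sketched in a few lines. To be clear, this is not AIDF's actual interface (those internals aren't shown in this article); `ShipGate` and `Invariant` are hypothetical names, a minimal illustration of "you may not ship until every declared guarantee is proven, not promised":

```python
from dataclasses import dataclass, field

@dataclass
class Invariant:
    """A guarantee the architecture must defend before release."""
    name: str
    proven: bool = False  # flipped only when a check actually passes

@dataclass
class ShipGate:
    invariants: list = field(default_factory=list)

    def declare(self, name: str) -> Invariant:
        # "Later" becomes explicit: every deferred guarantee is written down.
        inv = Invariant(name)
        self.invariants.append(inv)
        return inv

    def can_ship(self):
        # Shipping is blocked while any declared invariant is unproven.
        unproven = [i.name for i in self.invariants if not i.proven]
        return (len(unproven) == 0), unproven

gate = ShipGate()
ordering = gate.declare("event ordering preserved under backpressure")
gate.declare("no inconsistent state between services")

ordering.proven = True  # one guarantee verified, one still outstanding
ok, missing = gate.can_ship()
# ok is False until the second invariant is proven as well
```

The point of the sketch is only the shape of the mechanism: "later" stops being a vibe and becomes a named, unproven invariant that visibly blocks the release.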

If we’re honest, “we’ll fix it later” is almost always a bet that the blast radius will land on someone else.

That’s the part I’m done with.


How This Shows Up in AI Stacks (And Why It’s Worse Here)

In AI work, this lie shows up in more subtle ways:

  • “We’ll add real memory later; for now, a vector DB is fine.”
  • “We’ll formalize governance later; for now, we have some prompts and a policy doc.”
  • “We’ll re‑architect the orchestrator later; for now, the flow works in tests.”
  • “We’ll prove behavior later; for now, the evaluation numbers look good.”

The problem is that these “later”s stack:

  • you build retrieval masquerading as memory,
  • on top of an orchestrator masquerading as governance,
  • on top of models masquerading as stable behavior.

Then you hand it to real users and act surprised when:

  • the system contradicts itself,
  • latent bugs turn into trust‑destroying edge cases,
  • you can’t explain why something happened.

My response to that has been to build a stack that structurally resists “we’ll fix it later”:

  • RFS + NME — memory as a field with traits and governance, not just a vector index you promise to clean up.
  • MAIA + VEE — explicit intent and RL‑backed policies, so you can’t pretend “the model will figure it out later.”
  • AIDF + MA — behavior defined and proved up front, instead of retrofitted after incidents.
  • LQL + LEF — plans and execution as math (DAGs and particles) instead of ad‑hoc flows.
  • CAIO — contract‑driven orchestration that refuses to route around invariants for convenience.
  • TAI — an assistant built on top of all that, not on top of vibes.

The point isn’t to make things perfect.
The point is to make it very hard to smuggle “we’ll fix it later” into the foundation without writing it down.


The Personal Version of the Same Lie

I wish I could say I only saw this out there.
The truth is, I saw it in myself too.

There were long periods where my internal script sounded like:

  • “I’ll sleep properly later, once things calm down.”
  • “I’ll get my health together later, once this project ships.”
  • “I’ll be more patient with the kids later, once this phase is over.”
  • “I’ll make space to think deeply later, once the current fires are handled.”

Later never showed up.

The cost was:

  • a nervous system running too hot,
  • a house that felt more volatile than it needed to,
  • less clarity in architecture because my body was overloaded,
  • more regret than I want to admit.

Eventually, the same principle I applied to systems came home:

If the system only works when you lie to yourself about “later,” it’s a bad system.

So I started treating my own life with the same rigor:

  • walking six miles a day wasn’t “health later,” it was new infrastructure now,
  • being honest with my kids about capacity wasn’t “emotional work later,” it was governance now,
  • saying no to work when we were close to the edge wasn’t “opportunity loss,” it was protecting invariants.

Once you see that “I’ll fix myself later” is the same lie as “we’ll fix the architecture later,” it gets harder to tolerate.


Where This Leaves Us

If you’re in a room and you hear “we’ll fix it later,” you should treat it like a fire alarm.

Ask:

  • What exactly are we deferring?
  • What is the blast radius if we never come back to this?
  • Who pays the price if we’re wrong?
  • What would it look like to design this so we don’t need “later” as a crutch?
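Those four questions can even be forced structurally. A minimal sketch, assuming nothing about any particular tool (the `Deferral` record below is a hypothetical structure I'm inventing for illustration, not part of AIDF or anything else named here):

```python
from dataclasses import dataclass

@dataclass
class Deferral:
    """A 'we'll fix it later' that must answer for itself in writing."""
    what: str          # What exactly are we deferring?
    blast_radius: str  # What breaks if we never come back to this?
    who_pays: str      # Who absorbs the failure if we're wrong?
    mitigation: str    # How are we protected in the meantime?

    def is_honest_sequencing(self) -> bool:
        # Honest sequencing answers every question; a blank answer
        # means the deferral is wishful thinking, not a plan.
        return all([self.what, self.blast_radius, self.who_pays, self.mitigation])

d = Deferral(
    what="governance on the 'temporary' data pipeline",
    blast_radius="silent data-quality drift reaching customers",
    who_pays="support teams and downstream users",
    mitigation="",  # no protection yet, so this deferral should not pass review
)
# d.is_honest_sequencing() is False until a real mitigation is written down
```

The mechanism is deliberately dumb: it can't judge whether a mitigation is good, but it refuses to let "later" exist without the blast radius and the bill-payer written next to it.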

Sometimes deferral is necessary.
Constraints are real.
But there’s a difference between:

  • honest sequencing — “we’ll do X now and Y next quarter, and here’s how we protect ourselves in the meantime,”
    and
  • wishful thinking — “we know this is wrong, but we hope it doesn’t blow up before we’re gone.”

The stack I’m building — AIDF, MA, RFS, NME, MAIA, LQL, LEF, CAIO, VFE, VEE, AIOS, AIVA, TAI — is my attempt to build a world where “we’ll fix it later” has less room to hide.

In my life, the rule is simpler:

If I already know it’s wrong and I’m just hoping to dodge the consequences, I fix it now or admit I’m choosing the damage.

Anything else is lying — to clients, to teams, to my kids, to myself.

I’m done with that.


Key Takeaways

  • “We’ll fix it later” is usually not a time statement; it’s a values statement about what you’re willing to ship knowing the risks.
  • In practice, “later” almost never arrives — success cements fragile architecture, incentives reward momentum, and new emergencies take priority.
  • The AI stack I’m building (AIDF, MA, RFS, NME, MAIA, LQL, LEF, CAIO, VFE, VEE, AIOS, AIVA, TAI) is designed to make it hard to smuggle unproven behavior and fake memory into production.
  • The same lie shows up personally (“I’ll take care of myself later”), and it leads to fragile lives and families that only work when nothing goes wrong.
  • Honest sequencing is different from wishful thinking: you either protect against the deferred risk or admit you’re choosing a future failure.
  • Refusing the “we’ll fix it later” lie is less about perfection and more about responsibility to the people who will live with the consequences.

Related Articles

  • Proving Behavior: Why AI Needs Mathematical Guarantees
  • Why Software Is Failing — And How Math Can Save It
  • The Real Reason I Refuse to Build Fragile Systems
  • Protecting Stability — At Home and In Systems
  • What Engineering Looks Like When You Refuse Vibe-Code
Philip Siniscalchi