Most enterprise AI projects don't fail because of bad models. They fail because of decisions made six months before the first line of model code was written.
The pattern is consistent. A business unit identifies an opportunity. A strategy firm produces a deck. An internal champion presents it to leadership. Budget is approved. A development team is assembled. And then, somewhere between the approval and the first sprint, the architecture question is quietly skipped.
The question that doesn't get asked
"How will this system connect to your existing data?"
It sounds obvious. But in the enthusiasm of a well-received strategy deck, nobody asks it clearly enough — and nobody answers it honestly. The result is a model trained on the wrong data, integrated into the wrong layer of the stack, and deployed into an organisation that was never prepared to operate it.
This isn't a technology failure. It's an architecture failure that happened before any technology was chosen.
What good looks like
A well-architected AI project starts with three uncomfortable questions:
What data do you actually have — not what you think you have? Enterprise data is almost never in the format a model needs. It lives in legacy systems, siloed databases, and formats that predate the current IT team. Mapping this reality is unglamorous, slow, and entirely necessary.
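Mapping that reality can start small. As an illustrative sketch — every source name, field, and criterion below is hypothetical — the point is simply to record, per system, whether its data is actually consumable and has an accountable owner:

```python
# Illustrative data-reality audit: walk a list of hypothetical source
# systems and record which ones a model could NOT actually consume today.
from dataclasses import dataclass

@dataclass
class DataSource:
    name: str
    format: str           # e.g. "csv", "json", "mainframe-extract"
    machine_readable: bool
    has_owner: bool

def audit(sources):
    """Return (source, reason) pairs for sources that are not model-ready."""
    gaps = []
    for s in sources:
        if not s.machine_readable:
            gaps.append((s.name, "not machine readable"))
        elif not s.has_owner:
            gaps.append((s.name, "no accountable owner"))
    return gaps

# Hypothetical inventory of three enterprise systems.
sources = [
    DataSource("crm_exports", "csv", True, True),
    DataSource("claims_archive", "mainframe-extract", False, True),
    DataSource("vendor_feed", "json", True, False),
]
print(audit(sources))
```

The output of an exercise like this is unglamorous by design: a list of gaps with reasons, produced before anyone discusses models.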
What does the system need to do when it's wrong? Every AI system produces incorrect outputs. The architecture question is not "how do we make it accurate enough" — it's "what happens when it's wrong, and who is responsible?" In healthcare, this is a patient safety question. In finance, it's a compliance question. In logistics, it's an operations question. The answer shapes the entire integration design.
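The integration consequence of that question can be sketched in a few lines. The threshold and queue name below are hypothetical, but they illustrate the design point: the architecture decides in advance what a low-confidence output triggers and who receives it, rather than letting the model's best guess flow through silently:

```python
# Illustrative failure-mode routing: every prediction carries a confidence
# score, and anything below a (hypothetical) threshold is escalated to a
# named human queue instead of being acted on automatically.
THRESHOLD = 0.85

def route(prediction, confidence, escalation_owner="ops_review_queue"):
    """Return ("auto", prediction) or ("escalate", owner)."""
    if confidence >= THRESHOLD:
        return ("auto", prediction)
    # Below threshold: the answer to "what happens when it's wrong"
    # is a human with responsibility, not a silent best guess.
    return ("escalate", escalation_owner)

print(route("approve_claim", 0.91))  # → ("auto", "approve_claim")
print(route("approve_claim", 0.60))  # → ("escalate", "ops_review_queue")
```

The interesting decisions are not in the code: they are the threshold value and the identity of the owner, which are domain questions — patient safety, compliance, operations — long before they are engineering ones.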
Who owns it after the project ends? AI systems degrade. Models drift. Data pipelines break. If there is no owner with the technical capability to maintain the system, you're not building a product — you're building a time-limited proof of concept with a production price tag.
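Ownership becomes concrete when the owner has something to watch. A minimal drift check — with a hypothetical feature baseline and tolerance — might look like:

```python
# Illustrative drift check: compare the mean of a live feature stream
# against its training-time baseline. A shift beyond a (hypothetical)
# tolerance signals the owner to investigate or retrain.
from statistics import mean

def drifted(baseline_mean, live_values, tolerance=0.2):
    """True when the live mean has moved more than `tolerance`
    (as a fraction of the baseline) away from training conditions."""
    shift = abs(mean(live_values) - baseline_mean) / abs(baseline_mean)
    return shift > tolerance

print(drifted(100.0, [98, 102, 101]))   # → False (stable)
print(drifted(100.0, [140, 150, 145]))  # → True (drifted)
```

Real monitoring is richer than a mean comparison, but even this sketch makes the ownership question unavoidable: someone has to receive the `True`, and have the capability to act on it.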
The cost of skipping architecture
We've reviewed systems where the model itself was excellent — well-chosen, well-trained, producing genuinely useful outputs in isolation. But each was integrated into the enterprise stack in a way that made it fragile, opaque, and impossible to maintain without the original vendor.
The cost isn't the failed project. The cost is the two years of organisational credibility spent on it, and the resulting scepticism that makes the next, better-scoped project harder to fund.
What to do instead
Before any model selection, before any vendor conversation, before any budget request: map the data reality, define the failure modes, and identify the internal owners. If those three things can't be done, the project isn't ready — and the honest answer is to say so.
That's not a comfortable position for a consultant to take. But it's the only one that produces systems that work.