Introduction

AI adoption is the most predictable failure pattern in technology programmes today, and the failure has nothing to do with AI. Across the engagements I have run and seen over the past four years, the same shape recurs. The pilot works. The model performs above baseline. The use case is real and the business sponsor is enthusiastic. Then the rollout stalls. Six months later the production system is in a holding pattern, the sponsor has moved on, and the AI team is reporting to a steering committee that has no authority to change anything. The model still works. The institution does not. I have come to believe that AI adoption is governance work, not procurement. The interesting question is not which model to buy. The interesting question is who decides what the AI is allowed to recommend, who is accountable when it is wrong, and how the human work changes around it.

Why pilots succeed and rollouts fail

Pilots succeed because they are bounded. A controlled scope, a curated dataset, a clear sponsor with the authority to make decisions inside the boundary, and a team focused enough to make the model behave. Rollouts fail because the boundary disappears. The data is messier, the sponsors are plural, the decision rights are unclear, and the team that built the pilot has neither the authority nor the bandwidth to redesign the operating model around the deployment. Pilots prove the technology. Rollouts prove the institution. Most programmes have not done the institutional work, so most rollouts collapse quietly — in a way that looks like a delivery problem and is in fact a governance problem. The cleanest way to know whether a rollout will land is to ask whether the governance was designed before procurement closed.

Five questions governance has to answer before procurement

  • Who decides what this AI is allowed to recommend, and on what basis? Without explicit authority, every recommendation becomes a debate.
  • Who is accountable when the AI is wrong? If the answer is nobody, the organisation will not deploy the system on consequential decisions.
  • How are outputs audited? If the audit trail does not exist, the system cannot be defended to a regulator, a board member, or an angry customer.
  • How does the human work change around the AI? If the answer is 'unchanged', the AI will be ignored or overridden, and the productivity gain will not materialise.
  • How is the model maintained, retrained, and retired? If the answer is 'we will figure that out later', the institutional debt clock has already started.
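
These questions are organisational, not technical, but the answers only hold if they are written down and attached to the deployment rather than left in a kickoff deck. A minimal sketch of what 'answered in writing' can look like, in Python purely for concreteness; the class and field names (AIGovernanceRecord, accountable_owner, and so on) are illustrative, not a standard:

    from dataclasses import dataclass, field
    from datetime import date
    from typing import Optional


    @dataclass
    class AIGovernanceRecord:
        """One record per AI deployment: the five answers, in writing."""
        system_name: str
        # 1. Who decides what the AI is allowed to recommend, and on what basis?
        decision_authority: str               # a named role, not a committee
        decision_basis: str                   # the policy or mandate it rests on
        # 2. Who is accountable when the AI is wrong?
        accountable_owner: str                # a named person; "nobody" fails review
        # 3. How are outputs audited?
        audit_trail: str                      # where the trail lives, who reads it
        # 4. How does the human work change around the AI?
        workflow_changes: list[str] = field(default_factory=list)
        # 5. How is the model maintained, retrained, and retired?
        lifecycle_plan: Optional[str] = None  # None means the debt clock is running

A record with None or empty fields is not a detail to tidy up later; it is the institutional debt clock made visible before the purchase order is signed.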

Two AP-automation rollouts, twelve months apart

Two of my clients ran AP automation programmes within twelve months of each other. The technology was almost identical: an OCR-plus-LLM workflow for invoice processing, with human review for high-value or ambiguous cases. The pilots produced indistinguishable results. The rollouts diverged completely.

The first client had spent six weeks answering the five questions above before procurement. They defined which invoice categories the bot was allowed to approve unsupervised, which required a single human reviewer, which required a second sign-off, and which the bot was forbidden to touch. They named the accountability path for when the bot got something wrong. They published the audit trail to internal audit on day one. Twelve months in, the system processes roughly seventy per cent of invoices unsupervised, the AP team has been repositioned to vendor management, and finance reports the change as a structural improvement.

The second client treated the rollout as a delivery project. Procurement closed. The bot was deployed. The first time it misclassified a forty-thousand-dollar invoice, three executives spent four meetings arguing about who should have caught it. Six months later, the bot was running but bypassed by eighty per cent of the AP team, and the programme was deemed a disappointment.

Same technology. Different institutional architecture. Two completely different outcomes. The difference cost the first client six weeks of design and saved them at least a year of paying down institutional debt.
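
To make 'which invoice categories the bot was allowed to approve' concrete, here is a minimal sketch of tiered routing in Python. The tier names, thresholds, and category lists are hypothetical, not the first client's actual rules; the point is that the routing is explicit, owned, and auditable rather than implicit in whoever happens to look at the queue.

    from dataclasses import dataclass
    from enum import Enum, auto


    class ReviewTier(Enum):
        """Authority tiers for AI-produced invoice decisions."""
        AUTO_APPROVE = auto()    # bot approves unsupervised
        SINGLE_REVIEW = auto()   # one human reviewer signs off
        DUAL_SIGNOFF = auto()    # a second signature is required
        FORBIDDEN = auto()       # the bot is not allowed to touch this category


    @dataclass
    class Invoice:
        vendor: str
        category: str            # e.g. "utilities", "capex", "intercompany"
        amount: float            # invoice value
        model_confidence: float  # classifier confidence, 0.0 to 1.0


    # Illustrative thresholds and categories. The real values are a governance
    # decision agreed before procurement closes, not a tuning knob for the team.
    AUTO_APPROVE_LIMIT = 5_000
    DUAL_SIGNOFF_THRESHOLD = 25_000
    MIN_CONFIDENCE = 0.90
    FORBIDDEN_CATEGORIES = {"intercompany", "legal_settlements"}


    def route_invoice(inv: Invoice) -> ReviewTier:
        """Map an invoice to the oversight tier the governance design permits."""
        if inv.category in FORBIDDEN_CATEGORIES:
            return ReviewTier.FORBIDDEN
        if inv.model_confidence < MIN_CONFIDENCE:
            return ReviewTier.SINGLE_REVIEW   # low confidence always gets a human
        if inv.amount <= AUTO_APPROVE_LIMIT:
            return ReviewTier.AUTO_APPROVE
        if inv.amount < DUAL_SIGNOFF_THRESHOLD:
            return ReviewTier.SINGLE_REVIEW
        return ReviewTier.DUAL_SIGNOFF

Nothing in the sketch is sophisticated, and that is the point. Under rules like these, a forty-thousand-dollar misclassification lands in a two-signature queue, not in four meetings about whose fault it was.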

An old line worth repeating

"The technology is ready. The institution rarely is."

— said by me too often to clients to take pleasure in being right

The procurement trap

Procurement is not a bad function; it is the wrong starting point. When AI adoption begins in procurement, the conversation is about tools: which vendor, which platform, which model. These are valid questions in the middle of the process. They are catastrophic at the outset. The starting point is governance: who decides, who is accountable, how the work changes. Procurement converts those decisions into purchase orders. When procurement leads, the purchase order arrives and the decisions are still unmade. The vendor cannot answer them: the vendor is not in your building, does not run your operating model, and has no authority over your decision rights. The institution has to answer them itself, before procurement closes. Anything else is institutional debt the next leadership team will have to clean up.

Three governance patterns that work

  • Tiered authority on AI outputs. Define explicitly which outputs are unsupervised, which require single human review, which require two-signature review, and which the system is not allowed to produce. Tier the system to the risk.
  • A standing review forum with teeth. A monthly forum that audits AI outputs in production, with a defined escalation path and the authority to roll back deployments. Without teeth, the forum is theatre.
  • Sunset clauses on model deployments. Every AI deployment carries a documented review date. By that date, the model is either renewed, retrained, or retired. Default-on AI is institutional debt waiting to happen. A sketch of all three patterns follows this list.
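
The three patterns are small enough to be captured as a single policy object that travels with every deployment and sits on the forum's monthly agenda. A minimal sketch, again with hypothetical names (DeploymentPolicy, sunset_date, and so on); the substance is the fields, not the code:

    from dataclasses import dataclass
    from datetime import date


    @dataclass
    class DeploymentPolicy:
        """Governance metadata that travels with one AI deployment."""
        system_name: str
        output_tiers: dict[str, str]  # output type -> required oversight tier
        review_forum: str             # the standing forum that audits production outputs
        rollback_authority: str       # who can order a rollback, by name or role
        sunset_date: date             # renew / retrain / retire decision due by this date


    def overdue_reviews(registry: list[DeploymentPolicy], today: date) -> list[str]:
        """Deployments past their sunset date with no decision taken."""
        return [p.system_name for p in registry if today > p.sunset_date]

A sweep like overdue_reviews is the entire enforcement mechanism for the sunset clause: anything it returns is default-on AI, and the forum either renews it, retrains it, or rolls it back.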

Where most organisations get it wrong

  • They start with procurement, hoping the vendor will define the governance. The vendor cannot.
  • They assign AI ethics to a working group with no operational authority. Ethics work needs teeth.
  • They treat the AI ethics review as a one-time approval gate, not an ongoing operational discipline.
  • They confuse explainability with accountability. A model that can be explained but has no named owner is still ungoverned.
  • They build the technology and then ask change management to roll it out. Change management cannot fix governance gaps after the fact.

Closing

AI is the technology that punishes governance gaps most quickly, because the consequences scale fast and the inputs are unfamiliar. The organisations that scale AI successfully in the GCC over the next five years will not be the ones with the largest model budgets. They will be the ones whose leadership treated AI adoption as institutional design from the first conversation. The procurement bill will be the smallest line item in the post-mortem. The institutional architecture will be the line item that explains why the programme worked. This is the same lesson digital transformation has been teaching for two decades. AI just makes the lesson harder to ignore. If you are running an AI programme right now and you have not answered the five questions in writing, the cleanest thing you can do this week is stop the procurement until you have. Six weeks of governance design is the cheapest insurance against a year of cleanup. The institutions that have learned this are the ones whose AI programmes still exist.