The Speed to Rewire

Why AI transformation now belongs on the CEO agenda – and why the decisive advantage will be human, not merely technical

The argument

Over the past few years, the AI conversation in business has moved through three distinct phases. The first was fascination: generative AI as an extraordinary instrument for writing, searching, summarising and coding. The second was experimentation: pilots, sandboxes, copilots, innovation days and executive theatre. The third phase, the one now arriving, is more serious. AI is becoming a test of organisational speed. Not speed as haste, not speed as uncontrolled adoption, and certainly not speed as buying every fashionable tool in the market. I mean the deeper speed of an enterprise: the capacity to sense change, decide intelligently, redesign work, govern risk and learn faster than the environment is changing around it.

This matters particularly for large multinational enterprises carrying accumulated technical debt in tools, infrastructure, data and management practice. Many of these organisations were built for scale, resilience and control, not for continuous recomposition. Their ERP estates, data lakes, cloud migrations, procurement cycles, cyber controls, risk committees and legacy applications were not designed for a world in which intelligence is becoming embedded in every workflow. AI has exposed what was already true: the limiting factor is rarely the model. The limiting factor is the operating model.

This is the point I have tried to make in my recent writing on Generative AI and Agentic AI. The interesting question is no longer whether a model can produce an acceptable answer. The question is whether the organisation can turn that answer into a governed action, at scale, in context, with accountability. Agentic AI intensifies the issue because it shifts the discussion from tools that assist people to systems that initiate, plan, call other systems, execute tasks and learn from feedback. That is not a software upgrade. It is a challenge to the organisation’s metabolism.

What has changed

The empirical evidence now shows two truths moving together. First, adoption has accelerated dramatically. Stanford’s 2025 AI Index reported that 78% of organisations were using AI in 2024, up from 55% the previous year, while generative AI investment continued to expand significantly. McKinsey’s 2025 State of AI survey similarly describes wider use of AI and agentic AI, but also notes that many organisations are still struggling to move from pilots to scaled economic value. The pattern is clear: AI has crossed the adoption threshold, but not yet the transformation threshold.

Second, we are learning that value does not arrive evenly. Brynjolfsson, Li and Raymond’s research on generative AI in customer support found average productivity improvements of about 14%, with the largest gains accruing to less experienced workers. Dell’Acqua and colleagues, in their study with BCG consultants, described the ‘jagged technological frontier’: AI can lift performance significantly for tasks within its frontier and degrade performance for tasks outside it. This is crucial for boards. AI is not a universal accelerator. It is a conditional accelerator. It rewards judgement, task decomposition, good data, domain context and feedback. It punishes blind delegation.

This is why so many pilots disappoint. MIT’s 2025 GenAI Divide report argued that many enterprise initiatives fail because they are brittle, poorly integrated into daily work and unable to learn from context. Deloitte’s 2025 enterprise research similarly points to rising investment alongside elusive returns. IBM’s 2025 CEO research found that rapid investment has often created disconnected technology, while IBM’s 2026 CEO research reported that 83% of CEOs believe AI success depends more on people adoption than on technology itself. The message is no longer subtle: AI transformation fails when it is treated as deployment rather than rewiring.

From digital transformation to intelligent transformation

For thirty years, business transformation was largely about digitising existing processes. We put forms online, moved workloads to cloud, integrated channels, automated back offices and introduced analytics. Much of this was valuable, but it often preserved the inherited shape of the organisation. AI is different because it changes the unit of work. It can read, reason, generate, classify, converse, code and increasingly orchestrate. In agentic form, it can become a new participant in the enterprise operating system.

That creates a dangerous temptation: to insert AI into old processes and call it transformation. A CEO with a heavily indebted technology estate should resist this. If the process is broken, AI will accelerate the brokenness. If the data is fragmented, AI will make the fragmentation visible. If accountability is unclear, AI will amplify ambiguity. If middle management has been trained to protect functional boundaries, AI will not magically create cross-enterprise flow. The organisation will merely become faster at revealing its own incoherence.

The better question is: where are the enterprise constraints that AI now makes negotiable? Which approvals exist because information used to be scarce? Which reports exist because systems could not explain themselves? Which roles exist to reconcile data that should never have been inconsistent? Which customer journeys are slow because the organisation is divided by internal functions rather than external outcomes? Which technical debt has been tolerated because the cost of change was historically too high? AI changes the cost curve of coordination, but only if leadership is willing to challenge the contracts embedded in the organisation.

What it means to build organisational speed

Organisational speed is not the same as moving quickly. Many organisations are already fast in the wrong places. They can launch pilots quickly, buy tools quickly and issue press releases quickly. The more valuable form of speed has five characteristics.

The first is speed of sense-making. Leaders need the ability to detect where AI is changing customer expectations, cost structures, risk profiles and competitive boundaries. This requires external scanning, internal telemetry and board-level fluency. A board that treats AI as a technology topic will be late; a board that treats AI as a strategic discontinuity has a chance.

The second is speed of decision. AI opportunities decay when they are trapped in committees designed for yesterday’s risk. This does not mean weakening governance. It means designing governance that is proportionate, informed and close to the work. Responsible AI, security, data protection and model assurance must be built into the delivery system, not bolted on as a final inspection.

The third is speed of learning. Organisations must move from pilot culture to learning culture. A pilot asks whether a tool works. A learning system asks what changed in the work, what was adopted by people, what risk emerged, what data improved, what should be stopped and what should scale. This is where many enterprises are weakest. They accumulate experiments without compounding knowledge.

The fourth is speed of integration. The next advantage will not come from isolated copilots. It will come from connecting models to workflows, data, controls, APIs, human review, cyber policy, auditability and business outcomes. This is where technical debt becomes strategic debt. Legacy infrastructure is not merely an IT inconvenience; it is a brake on organisational learning.

The fifth is speed of trust. People will not adopt systems they do not understand, cannot challenge or believe are being used against them. Trust is not soft. It is the lubricant of transformation. Without it, employees route around new tools, managers preserve old behaviours and the organisation creates a theatre of adoption while real work continues elsewhere.

Why the deepest transformations are about people

BCG has often framed AI value through a 10-20-70 logic: roughly 10% of the value lies in algorithms, 20% in technology and data, and 70% in people, process and change. Whether one accepts the exact numbers or not, the principle is right. The transformation is ultimately human because work is social before it is technical. Decisions are made by people, exceptions are handled by people, customers trust people, risk is owned by people and culture is transmitted by people.

The World Economic Forum’s Future of Jobs Report 2025 expects 39% of workers’ core skills to change by 2030 and estimates that, in a workforce of 100 people, 59 will need training before the end of the decade. That is not an HR footnote. It is a balance sheet issue. Skills are now a strategic asset class. The enterprise that cannot reskill quickly cannot transform quickly. The enterprise that cannot redesign roles cannot capture AI value. The enterprise that treats people as recipients of change rather than authors of change will lose the very intelligence it needs.

This is particularly true for middle management. In many large enterprises, middle managers are the translation layer between strategy and work. They can either become the accelerators of AI transformation or its immune system. If they are excluded, threatened or left untrained, they will slow the transformation in rational self-defence. If they are equipped to redesign work, coach teams, manage risk and interpret AI outputs, they become the most important agents of speed.

The same is true for frontline expertise. AI systems require context. They need to learn from the people who know where processes fail, where customers become frustrated, where data is misleading, where policies contradict reality and where exceptions actually occur. In this sense, AI does not remove the need for human intelligence; it increases the premium on human judgement. The future enterprise is not a machine with people attached. It is a human institution with new cognitive infrastructure.

The CEO agenda

For the CEO of a large multinational enterprise, the practical implications are stark. First, do not allow AI to become another layer of technical debt. Every AI investment should be tested against architecture, data lineage, cyber posture, model governance and integration into real work. Second, move from use-case enthusiasm to capability building. The question is not how many pilots are running, but whether the organisation is building reusable data products, model assurance, workflow orchestration, talent pathways and decision rights. Third, make adoption a leadership discipline. Usage statistics are not enough; measure changes in cycle time, quality, customer outcomes, employee confidence and risk controls.

Fourth, create a strategic map of what must be rewired. Some processes should be automated, some augmented, some eliminated and some protected because human judgement is the source of value. Fifth, put people at the centre without romanticising the status quo. People-led transformation does not mean avoiding difficult choices. It means making those choices with clarity, fairness, participation and investment in capability.

My own view is that AI transformation is entering its second act. The first act was about possibility. The second is about organisational character. The winners will not be the firms with the most pilots, the largest tool catalogue or the loudest AI narrative. They will be the firms that build speed without losing judgement, automate without abandoning accountability, and use AI to enlarge human agency rather than merely reduce human cost.

That is why this is a CEO issue. Technical debt, process debt and skills debt have converged. AI has made the hidden friction of the enterprise visible. The question for the board is not whether the organisation should adopt AI. That decision has already been made by the market. The question is whether the organisation can rewire itself quickly enough, wisely enough and humanely enough to turn intelligence into advantage.

Selected references

Brynjolfsson, E., Li, D. and Raymond, L. R. (2023/2025), ‘Generative AI at Work’, NBER Working Paper 31161 and Quarterly Journal of Economics. https://www.nber.org/papers/w31161

Dell’Acqua, F. et al. (2023/2025), ‘Navigating the Jagged Technological Frontier’, Harvard Business School / Organization Science. https://www.hbs.edu/faculty/Pages/item.aspx?num=64700

Deloitte (2025), The State of Generative AI in the Enterprise. https://www.deloitte.com/uk/en/issues/generative-ai/state-of-generative-ai-in-enterprise.html

IBM Institute for Business Value (2025), CEO Study: CEOs Double Down on AI While Navigating Enterprise Hurdles. https://newsroom.ibm.com/2025-05-06-ibm-study-ceos-double-down-on-ai-while-navigating-enterprise-hurdles

IBM Institute for Business Value (2026), CEO Study: CEOs are Reshaping C-suite Roles for the AI Era. https://newsroom.ibm.com/2026-05-04-ibm-study-ceos-are-reshaping-c-suite-roles-for-the-AI-era

McKinsey & Company (2025), The State of AI: Global Survey 2025. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai

MIT NANDA (2025), The GenAI Divide: State of AI in Business 2025. https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Business_2025_Report.pdf

Stanford HAI (2025), Artificial Intelligence Index Report 2025. https://hai.stanford.edu/ai-index/2025-ai-index-report

World Economic Forum (2025), The Future of Jobs Report 2025. https://www.weforum.org/publications/the-future-of-jobs-report-2025