The question is no longer whether AI can generate code. It clearly can. The real question is whether “vibe coded” products can be trusted, governed and secured well enough to be taken seriously inside an enterprise.
Over the past year, tools such as Claude, ChatGPT, Gemini and others have dramatically lowered the barrier to software creation. What many are now calling vibe coding allows founders, product teams and even non-engineers to produce working applications at remarkable speed. Prototypes that once took months can now appear in hours. That is genuinely transformative.
But it also creates a dangerous illusion. The ability to generate software quickly is not the same as the ability to create software that is secure, resilient, compliant and enterprise ready. In fact, the faster code is created, the more important governance becomes. The risk is not that AI-generated code fails to compile. The risk is that it appears to work while hiding weaknesses that only emerge later under attack, under regulation, or under enterprise scrutiny.
Where the problem begins
This is where vibe coding may hit the rocks. Not because the model cannot write code, but because code alone is only one small part of software assurance. Enterprise-grade products require secure architecture, identity controls, dependency management, auditability, testing discipline, provenance, data governance, model risk controls, human accountability and clear operational ownership. None of that is guaranteed simply because an AI assistant can generate a neat application layer.
Global best practice is already pointing in this direction. NIST’s Secure Software Development Framework profile for generative AI makes clear that AI-assisted development still requires disciplined secure development, validation and supply-chain control. Work by the Open Worldwide Application Security Project (OWASP) on LLM application risk highlights issues such as prompt injection, insecure output handling, data leakage and supply-chain vulnerabilities. The UK’s guidance on secure AI system development and its recent Software Security Code of Practice push the same message: security must be designed in, not bolted on afterwards.
That matters commercially. A great many AI-generated products and services being built today are exciting, useful and investable at the prototype stage, but they are not yet enterprise ready in the full sense of the term. They may lack code provenance, robust access control, explainable governance, secure deployment patterns, red-team testing, policy enforcement and evidence that they can survive procurement due diligence. In other words, there is a widening gap between AI-enabled software creation and enterprise-grade software assurance.
Why S2aaS could matter
That gap is precisely where an opportunity emerges. I believe there is a growing market for a Secure Software as a Service model — S2aaS — sitting above or alongside the current generation of agentic and SaaS platforms. The proposition would not simply be to host software, nor merely to generate it faster, but to wrap AI-enabled product development in a governed, continuously monitored, policy-driven security and assurance layer. This would include secure coding controls, architectural review, software bill of materials, vulnerability scanning, secrets management, model governance, compliance mapping, runtime monitoring and board-level assurance reporting.
In practical terms, S2aaS could become the trust fabric for the vibe coding economy. Start-ups could build at speed, but within a managed security and governance envelope. Mid-sized firms could adopt AI-generated internal tools without carrying the full burden of building a mature software assurance capability themselves. Large enterprises could accelerate innovation while retaining procurement-grade evidence, audit trails and risk visibility. Regulators and boards would be more likely to support innovation if they can see that clear control frameworks exist around it.
Beyond Agentic AI versus SaaS
This is also why the debate between Agentic AI and traditional SaaS may be missing a deeper point. The next battleground may not simply be who automates more work. It may be who can deliver trusted automation at scale. In that world, S2aaS starts to look less like a niche service and more like SaaS 2.0: software delivery fused with security, governance, compliance and assurance by design.
My conclusion
The conclusion is therefore straightforward. Vibe coding is real, powerful and economically important. But on its own it is not enough for serious enterprise deployment. The winners in the next phase of the market may not be those who generate the most code the fastest. They may be the organisations that make AI-generated software trustworthy, governable and insurable. That is where value migrates once the first excitement fades.
So yes, I believe there is an opportunity here. The space between AI-generated software and enterprise trust is not a minor implementation issue. It is a strategic market gap. And for advanced security and governance organisations prepared to package that capability as a service, S2aaS could prove to be one of the most important commercial categories to emerge from the age of AI-assisted software development.
Reference points informing the argument
• NIST SP 800-218A, Secure Software Development Practices for Generative AI and Dual-Use Foundation Models (2024).
• NIST AI Risk Management Framework (AI RMF).
• OWASP Top 10 for LLM Applications 2025.
• NCSC / CISA / partner agencies, Guidelines for Secure AI System Development (2023).
• UK Government, Code of Practice for the Cyber Security of AI (2025).
• UK Government, Software Security Code of Practice (2025).
• European Commission, General-Purpose AI Code of Practice (2025).
