AI agents are rapidly entering enterprise environments. However, early deployments are already causing operational failures and exposing a deeper, largely overlooked challenge: poor governance. Organisations are facing a wave of “well-intentioned disasters,” in which AI agents take autonomous actions that unintentionally damage systems or data, often while trying to achieve the objective they were programmed for. A recent incident involving Replit, where an AI coding agent deleted an entire corporate codebase, has become a defining example of the risks.
Gupta warns that these failures will multiply as companies adopt hundreds of agents across workflows. While vendors are introducing tools such as Rubrik’s new Agent Rewind — designed to evaluate and reverse incorrect agent actions — she emphasises that the real bottleneck lies much earlier: the “zero-day governance issue.”
Unlike traditional zero-day flaws in cybersecurity, the term refers to the governance decisions organisations must make before an agent is even deployed:
- What data can the agent access?
- What guardrails define acceptable behaviour?
- How will the organisation measure safe versus unsafe outcomes?
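One way to make these three pre-deployment questions concrete is to encode the answers as an explicit policy that is checked before every agent action and logged for later review. The sketch below is purely illustrative: the names (`AgentPolicy`, `check_action`, the data sources and action labels) are hypothetical and do not correspond to any real product API, including Agent Rewind.

```python
# Hypothetical sketch of a pre-deployment agent policy. All names here are
# illustrative assumptions, not a real vendor API.
from dataclasses import dataclass, field


@dataclass
class AgentPolicy:
    allowed_data: set       # Q1: what data can the agent access?
    forbidden_actions: set  # Q2: what guardrails define acceptable behaviour?
    audit_log: list = field(default_factory=list)  # Q3: record for measuring
                                                   # safe vs unsafe outcomes

    def check_action(self, action: str, data_source: str) -> bool:
        """Allow an action only if it passes both guardrail checks,
        and record every decision for later review."""
        allowed = (
            data_source in self.allowed_data
            and action not in self.forbidden_actions
        )
        self.audit_log.append((action, data_source, allowed))
        return allowed


policy = AgentPolicy(
    allowed_data={"staging_db"},
    forbidden_actions={"delete_repository", "drop_table"},
)

print(policy.check_action("read_rows", "staging_db"))       # permitted
print(policy.check_action("delete_repository", "prod_db"))  # blocked twice over
```

The design point is that the policy exists, and is auditable, before the agent runs: the Replit-style failure mode is an agent acting where no such deny-list or data boundary was ever written down.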
CISOs, in particular, are raising red flags about data exposure, visibility, and control. Despite these concerns, Gupta says “FOMO is pushing enterprises forward.” Startups are already using agentic AI to scale engineering output dramatically, raising competitive pressure across industries. But no company has yet “cracked the code” for safe, scalable agent deployment.
Gupta expects adoption to accelerate in the next 6–12 months as organisations cycle through early failures, strengthen governance frameworks, and adopt safer operational patterns. For now, the message is clear: AI agents can deliver value — but only if governance becomes the first, not last, step.
Source: