The rush to adopt enterprise AI has left crucial safeguards behind. A series of new reports in 2025 confirms a dual threat: the machines themselves are prone to error, and the human governance needed to manage them is dangerously thin. The result is a growing, measurable risk inside the world’s biggest companies.
This is Bratislava—but it could be Boston or Berlin. An executive studies a market analysis on her screen. It arrived in minutes, not days. The text is clean. The numbers are plausible. The conclusions are bold. The report was written by an artificial intelligence agent. And she has no certain way to know if it is right.
Fragility by the Numbers
This scene, playing out in offices around the world, is the quiet heart of a growing corporate risk. In the rush to deploy AI, the tools have outpaced the rules. A cascade of new data confirms the gap. The machines are unreliable, and the human oversight is weak.
The evidence is not anecdotal. It is empirical. A September 2025 survey from PagerDuty found that eighty-four percent of firms have already suffered an AI-related outage. Eighty-five percent of executives admit they need better ways to detect AI errors. The speed of deployment creates a new kind of fragility.
Human review, the essential safeguard, is often missing. Research from McKinsey this year shows that only twenty-seven percent of organizations check all AI-generated content before it goes out the door. A similar share reviews less than a fifth of it. The rest is a gamble.
The human element is a wild card. When official systems are lacking, employees create their own. A study by KPMG and the University of Melbourne found that forty-four percent of American workers use AI in ways their employers have not authorized. Nearly half upload sensitive company information to public tools. Fifty-eight percent rely on AI outputs without a thorough assessment. The work is simply trusted.
A Failure of Governance
This is not just a failure of technology. It is a failure of governance. The policies, the training, and the controls have not kept pace with the machine’s reach. The problem is two-headed: the agent’s reliability and the organization’s discipline. The two are intertwined.
The market is beginning to notice. A dispatch from Thunk.AI on September 25 warns of a coming “trough of disillusionment,” borrowing Gartner’s phrase, tied directly to a lack of demonstrable AI reliability. The initial promise is colliding with operational reality. Experts such as Jennifer Kosar at PwC now argue that independent oversight is no longer just about mitigating risk. It is a prerequisite for achieving any return on investment.
The core truth is this: in 2025, the race to implement artificial intelligence has created a two-front problem. The outputs are not consistently trustworthy, and the human systems for checking them are not consistently present. The risk is operational. It is here. And it is measurable.