OpenText: Closing the Divide Between Rapid AI Adoption and Enterprise Governance
OpenText, in collaboration with the Ponemon Institute, has released a global study revealing that while AI adoption is accelerating at a breakneck pace, a critical gap separates deployment from the foundational security, reliability, and human-led governance required to manage emerging enterprise risks effectively.
3/27/2026
Key Highlights
While over half of businesses have integrated generative AI, the speed of deployment is dangerously outpacing the development of essential security and oversight frameworks.
Rigid or unenforceable policies are driving employees to use AI tools outside official oversight, creating a significant disconnect between corporate governance and real-world usage.
Organizations struggle with model bias and data reliability: only 51% of respondents rate AI as effective at reducing the time needed to detect anomalies or emerging threats.
Due to operational errors and a lack of model autonomy, more than half of enterprises believe constant human-in-the-loop oversight is non-negotiable for safe AI governance.
CISOs must prioritize information readiness, risk-based governance, and the appointment of dedicated AI leadership to bridge the 79% maturity deficit in AI cybersecurity.
The News
OpenText released a new global report, “Managing Risks and Optimizing the Value of AI, GenAI & Agentic AI,” developed in partnership with the Ponemon Institute. The research revealed that, while more than half of enterprises (52%) have fully or partially deployed GenAI, security and governance are falling behind. This gap highlights a growing challenge for the industry: organizations are adopting generative AI quickly, but many are doing so without the governance and security foundations needed to manage its risks. For more information, read the OpenText press release.
Analyst Take
OpenText has launched a comprehensive global study titled "Managing Risks and Optimizing the Value of AI, GenAI & Agentic AI," produced in collaboration with the Ponemon Institute. The findings indicate that while 52% of businesses have successfully integrated generative AI into their operations, either in part or in full, their security and oversight measures are failing to keep pace.
We see this misalignment as a mounting industry-wide hurdle: the velocity of AI adoption is outpacing the development of the defensive frameworks necessary to manage its unique risks. The report suggests that many organizations are embracing the power of generative tools without first establishing the governance and safety foundations required to protect their digital ecosystems.
From our viewpoint, the OpenText/Ponemon research highlights a specific gap where rapid adoption is occurring without the necessary governance and security foundations, and the HyperFRAME Lens State of the Enterprise AI Stack 1H 2026 data provides empirical evidence for why and how this disparity manifests. While the Ponemon study notes that security and governance are falling behind, HyperFRAME Lens research reveals a significant governance lag: 37.3% of organizations have restrictive policies that are essentially unenforceable. This friction creates a disconnect in which employees adopt generative tools quickly to maintain a competitive edge but do so outside of official oversight because the existing policies are seen as too rigid or impractical, fueling the rise of shadow AI.
This tension is further reinforced by the overwhelming anticipation for agentic AI, with 79.2% of respondents expressing adoption interest. This high demand for autonomous capabilities, paired with the reality that fewer than one in five organizations possess truly AI-ready architectures, sharply illustrates the Ponemon finding that the industry is moving significantly faster than its foundational safeguards can support. Consequently, the rush toward execution is bypassing the structural readiness required to manage the very risks these organizations are now beginning to face.
The AI Governance Gap: Rapid Deployment Outpaces Cybersecurity and Risk Frameworks
The latest survey results reveal a significant disparity between the rapid pace of AI deployment and the implementation of necessary governance and security protocols. Nearly 80% of organizations have yet to reach full AI maturity in cybersecurity, leaving them without fully deployed systems or comprehensive risk assessments. Only 41% of organizations have established AI-specific data privacy policies, a policy gap that is particularly concerning given that 59% of respondents report AI has actually made it more difficult to comply with existing privacy and security mandates.
The research also highlights the technical and ethical challenges inherent in managing modern language models. A substantial 62% of respondents struggle to minimize model bias risks, while 58% find it extremely difficult to mitigate input risks such as misleading or harmful responses. Furthermore, over half of those surveyed report difficulty in managing user-related risks, including the accidental spread of misinformation. Despite these mounting pressures, fewer than half of organizations have adopted a formal, risk-based AI governance approach to address the security threats and ethical issues currently undermining enterprise trust.
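To make the risk-based idea concrete, the sketch below shows, in Python, how a governance gate might tier AI use cases and surface missing controls before deployment. The tiers, attributes, and control names are illustrative assumptions for this note, not a framework taken from the OpenText/Ponemon report.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

# Illustrative control sets per tier -- assumptions, not a published framework.
REQUIRED_CONTROLS = {
    RiskTier.LOW: {"usage_logging"},
    RiskTier.MEDIUM: {"usage_logging", "bias_evaluation", "privacy_review"},
    RiskTier.HIGH: {"usage_logging", "bias_evaluation", "privacy_review",
                    "human_in_the_loop", "incident_playbook"},
}

@dataclass
class AIUseCase:
    name: str
    handles_personal_data: bool
    autonomous_actions: bool
    customer_facing: bool
    implemented_controls: set = field(default_factory=set)

def classify(use_case: AIUseCase) -> RiskTier:
    """Assign a risk tier from simple, auditable attributes."""
    if use_case.autonomous_actions or use_case.handles_personal_data:
        return RiskTier.HIGH
    if use_case.customer_facing:
        return RiskTier.MEDIUM
    return RiskTier.LOW

def deployment_gaps(use_case: AIUseCase) -> set:
    """Return the controls still missing for this use case's tier."""
    return REQUIRED_CONTROLS[classify(use_case)] - use_case.implemented_controls

if __name__ == "__main__":
    soc_triage = AIUseCase(
        name="GenAI alert triage",
        handles_personal_data=True,
        autonomous_actions=False,
        customer_facing=False,
        implemented_controls={"usage_logging"},
    )
    print(classify(soc_triage).value, sorted(deployment_gaps(soc_triage)))
```

The value of a gate like this is auditability: the criteria that place a use case in a tier are explicit, so governance decisions can be reviewed rather than debated.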
The Trust Gap: Navigating Reliability and Bias in AI Security Operations
While many organizations are deploying AI to bolster efficiency within security operations, significant hurdles regarding trust, reliability, and explainability are currently limiting the effectiveness and autonomy of these tools. A primary concern lies in threat detection, where persistent bias and reliability risks hinder performance; currently, only 51% of respondents believe AI is effective at reducing the time required to detect anomalies or emerging threats. Furthermore, fewer than half rate these systems as effective for deep threat hunting or reducing manual workloads, largely because nearly two-thirds of organizations find it extremely difficult to minimize model bias and discriminatory outputs.
Operational reliability presents an additional barrier, with nearly half of respondents citing errors in AI decision rules and data inputs as major obstacles to success. This lack of consistency explains why fully autonomous AI remains out of reach for most, as only 47% of organizations believe their models can learn robust norms and make safe decisions independently. Consequently, more than half of enterprises maintain that constant human oversight is a non-negotiable component of AI governance, especially as cyber attackers continue to adapt their tactics at a pace that outstrips current automated defenses.
We believe that OpenText should prioritize productizing information readiness by positioning its AI portfolio as a direct solution to the data input errors cited by 40% of respondents. By integrating automated data cleansing and validation tools directly into its Cybersecurity Aviator platform, the company can ensure higher reliability for its users. Additionally, to address the 62% of organizations currently struggling with model bias, OpenText should embed transparent governance through explainability dashboards and risk-based frameworks, allowing CISOs to verify AI decision rules in real time.
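As a rough illustration of what automated cleansing and validation can look like in front of an AI pipeline, here is a minimal Python sketch. The field names, schema, and thresholds are hypothetical assumptions, and the code is not an OpenText Cybersecurity Aviator API.

```python
from datetime import datetime

# Assumed telemetry schema for illustration only.
REQUIRED_FIELDS = {"timestamp", "source_ip", "event_type", "severity"}

def validate_event(event: dict) -> list[str]:
    """Return validation errors; an empty list means the event is model-ready."""
    errors = []
    missing = REQUIRED_FIELDS - event.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
    severity = event.get("severity")
    if not isinstance(severity, (int, float)) or not 0 <= severity <= 10:
        errors.append("severity must be a number in [0, 10]")
    try:
        datetime.fromisoformat(event.get("timestamp", ""))
    except (TypeError, ValueError):
        errors.append("timestamp is not ISO 8601")
    return errors

def cleanse(events: list[dict]) -> tuple[list[dict], list[tuple[dict, list[str]]]]:
    """Split a batch into model-ready events and rejects with explicit reasons."""
    clean, rejected = [], []
    for event in events:
        errs = validate_event(event)
        if errs:
            rejected.append((event, errs))
        else:
            clean.append(event)
    return clean, rejected

if __name__ == "__main__":
    batch = [
        {"timestamp": "2026-03-27T10:15:00", "source_ip": "10.0.0.5",
         "event_type": "login_failure", "severity": 7},
        {"source_ip": "10.0.0.9", "event_type": "port_scan", "severity": "high"},
    ]
    ok, bad = cleanse(batch)
    print(len(ok), "clean;", bad[0][1] if bad else "no rejects")
```

Rejecting malformed events with explicit reasons, rather than letting them reach the model, directly targets the data input errors the report identifies as an obstacle.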
OpenText should champion human-in-the-loop workflows by designing its autonomous tools to act as force multipliers that prioritize high-intent alerts for human analysts rather than attempting full, unsupervised autonomy. Given that over half of enterprises view human oversight as non-negotiable, this approach aligns product capabilities with current market trust levels. Moreover, the company can lead the market by establishing a Trusted AI certification, providing a standardized compliance and privacy framework that helps the 59% of struggling organizations align their generative AI deployments with evolving global security regulations.
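A human-in-the-loop workflow of this kind can be as simple as confidence-based routing that never lets the model take consequential action on its own. The sketch below, with assumed thresholds and queue names, illustrates the pattern.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    alert_id: str
    description: str
    model_confidence: float  # 0.0-1.0 score from the detection model
    model_rationale: str     # short explanation surfaced to the analyst

# Illustrative thresholds -- assumptions to be tuned per environment.
AUTO_DEPRIORITIZE_BELOW = 0.20
ESCALATE_ABOVE = 0.80

def triage(alert: Alert) -> str:
    """Route every alert to a human queue; the model only sets ordering and urgency."""
    if alert.model_confidence < AUTO_DEPRIORITIZE_BELOW:
        return "queue:low_priority_review"   # still reviewed, just later
    if alert.model_confidence > ESCALATE_ABOVE:
        return "queue:analyst_escalation"    # a human decides the response
    return "queue:standard_review"

if __name__ == "__main__":
    alert = Alert("A-1042", "Anomalous service-account login", 0.91,
                  "Login geography and time deviate from 90-day baseline")
    print(triage(alert))  # -> queue:analyst_escalation
```

Every path terminates in a human review queue, so the model acts as a force multiplier for analyst attention rather than an unsupervised responder, which matches current market trust levels.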
Looking Ahead
We believe decision makers should view these findings as a clear signal that the rapid adoption of AI is currently outstripping the necessary governance and security foundations. To bridge this gap, CISOs should prioritize the implementation of AI-specific data privacy policies and risk-based governance frameworks to address the 79% maturity deficit reported. Security leaders must also mandate continuous human oversight in SOC workflows to mitigate the persistent risks of model bias and operational errors in decision rules.
To improve reliability, IT teams should invest in data cleansing and information readiness initiatives to ensure that the data ingested by AI models is accurate and consistent. Furthermore, decision makers should shift their focus from pure automation to explainability, ensuring that AI outputs are transparent enough for analysts to trust and act upon. Finally, organizations should consider appointing dedicated leadership, such as a Chief AI Officer, to align technical AI deployments with broader enterprise security and compliance goals.
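To illustrate the shift from pure automation to explainability, the following Python sketch attaches an audit-ready rationale record to every AI verdict. The schema, field names, and version string are assumptions for illustration, not a vendor format.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ExplainedVerdict:
    alert_id: str
    verdict: str        # e.g. "escalate" or "suppress"
    model_version: str
    confidence: float
    top_signals: list   # the features that drove the decision
    generated_at: str

def explain(alert_id: str, verdict: str, confidence: float,
            signals: list) -> ExplainedVerdict:
    """Bundle a verdict with the evidence an analyst needs to trust or override it."""
    return ExplainedVerdict(
        alert_id=alert_id,
        verdict=verdict,
        model_version="detector-2026.03",  # assumed versioning scheme
        confidence=confidence,
        top_signals=signals,
        generated_at=datetime.now(timezone.utc).isoformat(),
    )

if __name__ == "__main__":
    record = explain("A-1042", "escalate", 0.91,
                     ["login_geo_deviation", "off_hours_access"])
    print(json.dumps(asdict(record), indent=2))  # audit-ready JSON for review
```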
Ron Westfall | VP and Practice Leader for Infrastructure and Networking
Ron Westfall is a prominent analyst in technology and business transformation. Recognized as a Top 20 Analyst by AR Insights and a contributor to TechTarget, he is featured in major media such as CNBC, Schwab Network, and NMG Media.
His expertise covers transformative fields such as Hybrid Cloud, AI Networking, Security Infrastructure, Edge Cloud Computing, Wireline/Wireless Connectivity, and 5G-IoT. Ron bridges the gap between C-suite strategic goals and the practical needs of end users and partners, driving technology ROI for leading organizations.